The Power of Cinema

June 28, 2020 · 0 min · alexchen

Raspberry Pi K3s NFS FS

NFS on Raspberry Pi

I am currently learning how to run GROMACS + MPICH molecular dynamics simulations on a Raspberry Pi 4 cluster, which calls for a distributed storage system. Given the Pi's limited performance, running OpenEBS would waste compute resources, so I settled on lightweight NFS for file sharing and storage.

NFS on K3s aarch64

Basic dependencies

NFS server:

```shell
sudo apt install nfs-kernel-server
```

Add the shared directories to /etc/exports:

```
/mnt/data/nfs      192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)  # external use
/mnt/data/kubedata 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)  # used by k3s
```

Start the NFS server:

```shell
sudo systemctl restart nfs-kernel-server
```

NFS client:

```shell
sudo apt install nfs-common
sudo mount 192.168.1.145:/mnt/data/nfs ./nfs
```

Add to /etc/fstab:

```
192.168.1.145:/mnt/data/nfs /home/chenfeng/nfs nfs auto,nofail,noatime,nolock 0 0
```

K3s NFS Volume

This requires the NFS Client Provisioner provided for Kubernetes. My k3s cluster runs on Ubuntu 20.04 aarch64, while the official NFS Client Provisioner image targets ARM v7, so the provisioner has to be rebuilt. All of the following steps were performed on a Raspberry Pi 4.

The patch is as follows:

```diff
diff -ur ./Makefile /tmp/nfs-client/Makefile
--- ./Makefile	2020-06-28 07:47:43.883181030 +0000
+++ /tmp/nfs-client/Makefile	2020-06-28 10:06:00.588586966 +0000
@@ -28,17 +28,16 @@
 container: build image build_arm image_arm
 
 build:
-	CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o docker/x86_64/nfs-client-provisioner ./cmd/nfs-client-provisioner
+	go build -a -ldflags '-extldflags "-static"' -o docker/x86_64/nfs-client-provisioner ./cmd/nfs-client-provisioner
 
 build_arm:
-	CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -a -ldflags '-extldflags "-static"' -o docker/arm/nfs-client-provisioner ./cmd/nfs-client-provisioner
+	go build -a -ldflags '-extldflags "-static"' -o docker/arm/nfs-client-provisioner ./cmd/nfs-client-provisioner
 
 image:
 	docker build -t $(MUTABLE_IMAGE) docker/x86_64
 	docker tag $(MUTABLE_IMAGE) $(IMAGE)
 
 image_arm:
-	docker run --rm --privileged multiarch/qemu-user-static:register --reset
 	docker build -t $(MUTABLE_IMAGE_ARM) docker/arm
 	docker tag $(MUTABLE_IMAGE_ARM) $(IMAGE_ARM)
Only in /tmp/nfs-client/deploy: .deployment-arm.yaml.swp
diff -ur ./deploy/deployment-arm.yaml /tmp/nfs-client/deploy/deployment-arm.yaml
--- ./deploy/deployment-arm.yaml	2020-06-28 09:24:48.499572298 +0000
+++ /tmp/nfs-client/deploy/deployment-arm.yaml	2020-06-28 10:06:11.876117004 +0000
@@ -4,7 +4,8 @@
   name: nfs-client-provisioner
   labels:
     app: nfs-client-provisioner
-  namespace: nfs
+  # replace with namespace where provisioner is deployed
+  namespace: default
 spec:
   replicas: 1
   strategy:
@@ -20,7 +21,7 @@
       serviceAccountName: nfs-client-provisioner
       containers:
         - name: nfs-client-provisioner
-          image: 192.168.1.114:5000/nfs-client-provisioner-arm
+          image: quay.io/external_storage/nfs-client-provisioner-arm:latest
           volumeMounts:
             - name: nfs-client-root
               mountPath: /persistentvolumes
@@ -28,11 +29,11 @@
             - name: PROVISIONER_NAME
               value: fuseim.pri/ifs
             - name: NFS_SERVER
-              value: 192.168.1.145
+              value: 10.10.10.60
             - name: NFS_PATH
-              value: /mnt/data/kubedata
+              value: /ifs/kubernetes
       volumes:
         - name: nfs-client-root
           nfs:
-            server: 192.168.1.145
-            path: /mnt/data/kubedata
+            server: 10.10.10.60
+            path: /ifs/kubernetes
diff -ur ./deploy/test-pod.yaml /tmp/nfs-client/deploy/test-pod.yaml
--- ./deploy/test-pod.yaml	2020-06-28 09:36:06.872994438 +0000
+++ /tmp/nfs-client/deploy/test-pod.yaml	2020-06-28 10:06:11.920115180 +0000
@@ -5,7 +5,7 @@
 spec:
   containers:
   - name: test-pod
-    image: 192.168.1.114:5000/ubuntu:20.04
+    image: gcr.io/google_containers/busybox:1.24
     command:
       - "/bin/sh"
     args:
diff -ur ./docker/arm/Dockerfile /tmp/nfs-client/docker/arm/Dockerfile
--- ./docker/arm/Dockerfile	2020-06-28 07:47:43.759177680 +0000
+++ /tmp/nfs-client/docker/arm/Dockerfile	2020-06-28 10:06:00.628585293 +0000
@@ -12,7 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM hypriot/rpi-alpine:3.6
-RUN apk update --no-cache && apk add ca-certificates
+FROM ubuntu:20.04
 COPY nfs-client-provisioner /nfs-client-provisioner
 ENTRYPOINT ["/nfs-client-provisioner"]
```

nfs-client/cmd/nfs-client-provisioner is the provisioner source directory. ...
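Once the rebuilt provisioner is running, volumes are requested through a StorageClass whose `provisioner` field matches the `PROVISIONER_NAME` from the Deployment above (`fuseim.pri/ifs`). A minimal sketch, not from the original post: the class and claim names below (`managed-nfs-storage`, `test-claim`) are illustrative.

```yaml
# StorageClass wired to the nfs-client provisioner deployed above
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must equal PROVISIONER_NAME in the Deployment
---
# A claim that the provisioner should satisfy by creating a
# subdirectory under the configured NFS_PATH on the NFS server
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

A pod mounting `test-claim` should then see its data appear in a per-claim directory on the NFS export.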

June 28, 2020 · 3 min · alexchen

Rosetta Project Update 2020 06 26

The Rosetta@home project computationally designed an immune protein against SARS-CoV-2 and tested it in animal experiments, where it protected animals from a lethal dose of the new coronavirus; the protein is still experimental and being optimized. Since the beginning of April I have been running 35 Raspberry Pi 4 (4 GB) boards (140 cores, 280 W total power draw) on Rosetta@home, and I hope to see more results from this research in the next six months, which would make the electricity and hardware costs worthwhile. ...

June 26, 2020 · 1 min · alexchen

Blender Art of Creating Protein 3D Structures 1

What is Blender

Blender is an open-source 3D modeling program. I have been learning it in order to apply digital matte painting techniques to short-film creation, such as asset creation and 2.5D compositing for scene extension; the corresponding commercial packages are Maya, 3DCoat, 3ds Max, and SideFX Houdini. Blender's advantage over the commercial programs is, first, that it is open source, and second, that it offers a self-contained workflow: modeling, texturing, rigging, animation, compositing, and render output can all be completed inside Blender. Blender continues to iterate, and the system has begun adopting increasingly advanced technology, such as the EEVEE real-time rendering engine, which reduces production costs when making and rendering animations. ...

June 21, 2020 · 3 min · alexchen

ML at the Edge

The Mathematica notebook source used in this article can be downloaded here.

Why this article

Looking at the current market, many large companies are promoting the development of AI, yet most of the AI applications we actually see are smart cars, autonomous driving, image recognition, behavior prediction, medical assistance, recommendation systems, speech recognition, and image and speech generation. These rarely intersect with large-scale industry, and the reason is the cost of machine-learning computation.

For inference and prediction in industry, the first requirements are data security and real-time response. The earlier article on local machine learning with the Coral Dev Board showed the scope of on-device ML; within industry, AI applications remain a small fraction. Autonomous driving and medical assistance can be considered first forays into industrial use, but most AI applications stay at the concept stage and rarely become commercially viable industrial projects, again because of security and cost.

Imagine you have developed an ML application for industry: detecting equipment faults, predicting a machine's next operating state, or relating company profit to failure rates and worker turnover. Deploying such models must satisfy harsh industrial constraints. If the factory has essentially no network coverage, deploying a fault-detection model already becomes a problem: do you deploy an on-premises cluster, or a terminal whose data is entered by hand? With a cluster, thousands of machines mean many nodes and substantial cost. Deploying the model on a Jetson Nano is already cheap per unit, but at a scale of thousands of Nanos you must consider power consumption, module failure rates, replacement costs, and the cost of a highly available network between modules. Could the model instead run on even cheaper devices, such as microcontrollers? That would cut cost and power dramatically, and greatly improve the odds that a large company adopts the solution.

The next 5-10 years may well see machine learning take off in industry. TinyML means running and inferring models on small microcontrollers: it can run on the microcontrollers inside everyday household appliances, in tiny integrated circuits, and on ubiquitous small devices.

As for ML's industrial prospects, take film: motion capture computed from small wearable devices (a motion-capture rig is currently very expensive), or storyboard production. For independent filmmakers this would be a boon: with a small device running a generative model, a filmmaker could reconstruct a 3D version of the surrounding environment, import character models for previsualization, and generate storyboards frame by frame, improving the efficiency of independent creation and helping actors understand the atmosphere of a scene and how to perform. Of course, having ML compose beautiful violin music is very hard, let alone play the violin 🎻, because a soul is poured into the instrument 😄.

The TinyML workflow

1. Define the goal
2. Collect data
3. Design a network architecture for the actual scenario
4. Train the model
5. Convert and deploy the model
6. Debug problems encountered at runtime

Example

Fit the function Sin(x): given x, predict the value of Sin(x).

Mathematica prototype:

```mathematica
data = Table[
   x -> Sin[x] + RandomVariate[NormalDistribution[0, .15]],
   {x, 0, 2 \[Pi], .001}];
ListPlot[List @@@ data, PlotStyle -> Dashed]
data = RandomSample[data];
trainData = Take[data, {1, Floor[0.6*Length[data]]}];
validationData = Take[data, {Floor[0.6*Length[data]] + 1, Floor[0.8*Length[data]]}];
testData = Take[data, {Floor[0.8*Length[data]] + 1, Length[data]}];
Total[Length /@ {trainData, validationData, testData}] - Length[data]
Length /@ {trainData, validationData, testData}
```

Model 1: ...
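The Mathematica prototype can also be sketched in Python. The version below is my own sketch, not from the original notebook: it generates the same noisy Sin(x) data, performs the 60/20/20 split, and fits a tiny one-hidden-layer network. As a simplification, only the linear readout is fitted (by least squares over random tanh features) instead of full backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy Sin(x) samples on [0, 2*pi], mirroring the Mathematica Table
x = rng.uniform(0.0, 2 * np.pi, 3000)
y = np.sin(x) + rng.normal(0.0, 0.15, x.shape)

# Shuffle, then split 60/20/20 into train/validation/test indices
idx = rng.permutation(len(x))
n_tr, n_va = int(0.6 * len(x)), int(0.8 * len(x))
tr, va, te = idx[:n_tr], idx[n_tr:n_va], idx[n_va:]  # va unused in this sketch

# One hidden layer of 64 random tanh features; only the readout
# weights W2 are trained, by ordinary least squares
W1 = rng.normal(0.0, 1.0, (1, 64))
b1 = rng.uniform(-np.pi, np.pi, 64)

def features(xs):
    # Hidden-layer activations for a batch of scalar inputs
    return np.tanh(xs[:, None] @ W1 + b1)

W2, *_ = np.linalg.lstsq(features(x[tr]), y[tr], rcond=None)

test_mse = float(np.mean((features(x[te]) @ W2 - y[te]) ** 2))
print(f"test MSE: {test_mse:.4f}")  # near the 0.15**2 = 0.0225 noise floor
```

A model like this is small enough that, once retrained with a proper framework, it could be converted for microcontroller deployment, which is exactly the TinyML workflow described above.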

June 16, 2020 · 7 min · alexchen