Creating a GPU Accelerated Deep-Learning Environment on Arch Linux

This article logs a weekend of effort to create a deep-learning environment which meets the following criteria:

  • GPU Enabled
  • On Arch Linux
  • Uses Keras with Tensorflow as a backend
  • Main IDE being RStudio

It was a tough one.

TL;DR

There was an error I had a hell of a time debugging. Installing the toolchain is fairly straightforward, except for CUDA. At the time of writing (2018-04-29), there is a version mismatch between the CUDA and cuDNN packages in the Arch Linux repositories.

This resulted in the following error every time I tried to import tensorflow in Python.

ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

The Arch Linux cuda package was pulling the latest version, 9.1.1 (at the time of writing), while the cudnn package was looking for version 9.0. That little mismatch cost me 10 hours.
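
If you want to double-check whether the mismatch is still present on your system, pacman can show you both versions (a quick sketch; -Si queries the repositories, -Qi the installed packages):

pacman -Si cuda cudnn | grep -E '^(Name|Version)'
pacman -Qi cuda cudnn | grep -E '^(Name|Version)'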

0. Other Arch Linux Deep-Learning Articles

There are a couple of other Arch Linux deep-learning setup walkthroughs. I definitely need to give these guys credit; they are smarter than me. However, neither walkthrough had everything I was looking for.

This article was alright, but it focused a lot on preparing Arch Linux from bare metal, which is usually the right idea with Arch if you are on a resource budget, for example, running on a server or a Raspberry Pi. But the few extra bytes of RAM saved don't really justify the time spent on meticulous tuning when we will be talking in megabytes, not bytes. And let my immolation begin.

Also, this article doesn’t include information on GPU support. Whaawhaa.

This one was a bit closer to what I needed. In fact, I did use the middle part. However, the version mismatch was not mentioned. Of course, that's not the author's fault; at the time he wrote it, I'm guessing the repositories matched.

Alright, on to my attempt.

1. Install Antergos (Arch Linux)

I love me some Arch Linux. It's lightweight and avoids the long-term issues of other flavors. Plus, it is meant to run headless, so it's great for embedded projects. Given how many embedded projects I take on, I became accustomed to using it daily; eventually, I made it my main desktop flavor. I shouldn't sound too Linux-snobby, though: I dual-boot it on my MacBook Pro. The one issue with Arch Linux is it can be a little unfriendly to new users, or those with limited time who can't be bothered with the nuances of setup. Enter Antergos.

Antergos is essentially Arch Linux with a desktop environment and a GUI installer. A perfect choice for my deep-learning endeavors. Really, you should check it out. Go now.

We’re going to use it for this project.

Download the iso file

You’ll need a little jumpdrive, 4gb should work.

I use Etcher as it makes it painless to create boot media.

Insert the jumpdrive, open Etcher, select the Antergos ISO file, and let Etcher do its thing. Here's the usual warning: if you have anything on your jumpdrive, it's about to get deleted forever.

Insert the media into the machine you want to install Arch on and boot from the jumpdrive.

Windows

You will need to hit a special key during the boot sequence to enter the BIOS boot menu.

Mac

While booting hold down the Option key.

If all goes well you should see a menu which says

Welcome to GRUB!

And then shows an Antergos boot menu. Select boot Antergos Live.

Once the boot sequence is finished, you should see the Antergos desktop environment start and, shortly after, cnchi, which is Antergos' GUI installer.

Select Install It. The installer is fairly self-explanatory, but if you run into any issues, please feel free to ask me questions in the comments. I'm glad to help.

Once the installer is complete you will be prompted to restart the computer. It’s go time.

2. Install NVIDIA

When you boot up the installed Antergos, open the terminal.

We will start by installing the base NVIDIA packages. As part of this, we are going to get the wrong version of CUDA. But I found installing the NVIDIA stack as whole packages and then replacing CUDA with an earlier version much easier than trying to pull everything together myself.

Ok, here we go.

sudo pacman -S nvidia nvidia-utils cuda cudnn

That might take awhile.

So, how you been? Oh–wait, it’s done.

Ok, to initialize the changes reboot.

sudo reboot now
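
Once you're back, a quick sanity check that the NVIDIA driver actually loaded (nvidia-smi ships with nvidia-utils):

nvidia-smi

It should list your GPU and the driver version. If it errors out, the driver isn't loaded and CUDA won't work either.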

3. Downgrade CUDA to Match cuDNN

That should have gotten everything at once. Now, let’s downgrade CUDA from 9.1 to 9.0.

wget https://archive.archlinux.org/packages/c/cuda/cuda-9.0.176-4-x86_64.pkg.tar.xz

This downloads a package file for CUDA 9.0, which is what the most recent version of Tensorflow (1.8 at the time of writing) expects. I found the easiest way to replace CUDA 9.1 with 9.0 was to simply double-click on the downloaded file in the GUI file browser. This opens it in Antergos' answer to a GUI-based package manager. It will warn you that this package will downgrade your CUDA version and ask you to commit to the changes. Hit the Commit button.

Wait for the file to be replaced before moving on.
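
If you'd rather stay in the terminal, the same downgrade can be done with pacman directly (a sketch; pacman -U installs a local package file):

sudo pacman -U cuda-9.0.176-4-x86_64.pkg.tar.xz

You may also want to add IgnorePkg = cuda to the [options] section of /etc/pacman.conf, so the next pacman -Syu doesn't immediately upgrade it back to 9.1.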

4. Anaconda (Optional)

Anaconda is a great package manager for data (mad) scientist tools. It is Python-centric, but also supports R and other stuff I don't know how to use yet.

We will be using it to prepare our system to support deep-learning projects.

Download the Linux version suited for your computer.

Once the file is downloaded, right-click on it and select Show In Folder. Once there, right-click in the open space and select Open in Terminal.

Make Anaconda executable and then run it.

chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
./Anaconda3-5.1.0-Linux-x86_64.sh

The Anaconda installation is off and running. It will ask you to agree to the license. After that, it will ask whether you want to install Anaconda in its default directory. We do.

Now, it will install every data scientist package known to existence. Mwhahaa. Erm.

When it asks

Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/ladvien/.bashrc ? [yes|no]

Type yes. This will make Anaconda accessible throughout your system.

Of course, this new path variable will not be loaded until you start your user session again (log off and back on). But we can force it to load by typing.

cd ~
source .bashrc

Double check we are using the Anaconda version of Python.

[ladvien@ladvien ~]$ which python
/home/ladvien/anaconda3/bin/python

If it doesn’t refer to anaconda somewhere in this path, then we need to fix that. Let me know in the comments below and I’ll walk you through correcting it.

If it does, then let’s move forward!

5. Tensorflow and Keras

Alright, almost done.

Let’s go back to the command prompt and type:

sudo pacman -S python-pip

This will download Python’s module download manager pip. This is usually packaged with Python, but isn’t included on Arch.

How’d we get Python? Anaconda installed it.

Let’s download Tensorflow with GPU support.

sudo pip install tensorflow-gpu --upgrade --ignore-installed
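
If pip pulls a Tensorflow build expecting a different CUDA version than the 9.0 we just installed, you can pin the 1.8 release explicitly (a sketch):

sudo pip install tensorflow-gpu==1.8.0 --ignore-installed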

Let’s test and see if it’s worked. At command prompt type

python

And in Python

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

You should see a response similar to:

2018-05-01 05:25:25.929575: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.66GiB
2018-05-01 05:25:25.929619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-01 05:25:26.333292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-01 05:25:26.333346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0
2018-05-01 05:25:26.333356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N
2018-05-01 05:25:26.333580: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5442 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
2018-05-01 05:25:26.455082: I tensorflow/core/common_runtime/direct_session.cc:284] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1

Which means you are good to go! At this point, Python is set up to do accelerated deep-learning. Most deep-learning peeps stop here, as Python is the deep-learning language. However, like a pirate, I'm an R sort of guy.

6. Installing R and RStudio

To set up a GPU-accelerated deep-learning environment in R, there isn't a lot of additional work. There are keras and tensorflow R packages, which connect the R code to a Python backend.

To get R in Arch Linux open the terminal and type:

sudo pacman -S r

And what’s R without RStudio? Actually, it’s still R, which is bad-ass unto itself–but anyway, let’s not argue. Time to download RStudio…because you insist.

In terminal

cd ~
git clone https://aur.archlinux.org/rstudio-desktop-bin.git
cd rstudio-desktop-bin
makepkg -i
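
A heads-up: makepkg needs the base-devel group to build packages (and git for the clone above). If the build complains about missing tools, this should cover it:

sudo pacman -S --needed base-devel git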

After, you should find RStudio in the Antergos Menu.

You can right click on the icon and click Add to Panel to make a shortcut.

Open up RStudio and let's finish this up.

7. R Packages for Deep Learning

Inside RStudio’s code console type

install.packages("tensorflow")

This will install the package which will help the R environment find the Tensorflow Python modules.

Then,

install.packages("keras")

Keras is the boss package; it connects all the needed Python modules to Tensorflow so we can focus on just the high-level deep-learning tuning. It's awesome.

Once the keras package is installed, we need to load it and connect it to the underlying infrastructure we set up.

library(keras)
install_keras(method = "conda", tensorflow = "gpu")

This will install the underlying Keras Python modules using the Anaconda ecosystem and the Tensorflow modules built against CUDA and cuDNN. Note, we set up a lot of this manually, so it should report that the needed modules are already there. However, this step is still needed to awaken R to the fact those modules exist.

Alright, moment of truth. Let’s run this code in R.

library(tensorflow)

with(tf$device("/gpu:0"), {
  const <- tf$constant(42)
})

sess <- tf$Session()
sess$run(const)

If all went well, it should provide you with a familiar output

> library(tensorflow)
>
> with(tf$device("/gpu:0"), {
+   const <- tf$constant(42)
+ })
/home/dl/.virtualenvs/r-tensorflow/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
>
> sess <- tf$Session()
2018-05-01 05:55:07.412011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.38GiB
2018-05-01 05:55:07.412057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-01 05:55:07.805042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-01 05:55:07.805090: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0
2018-05-01 05:55:07.805115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N
2018-05-01 05:55:07.805348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5150 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
> sess$run(const)
[1] 42

8. Scream Hello World

And the payoff?

Using the prepared Deep Dream script from the Keras documentation

Voila!

Google Vision API using Raspberry Pi and Node

This is a jumpstart guide to connecting a Raspberry Pi Zero W to the Google Vision API.

1. Get an Account

Sadly, the Google Vision API is not a completely free service. At the time of writing, an API account provides 1,000 free Google Vision API calls a month. After that, it's $1.00 for each 1,000 calls.

I know, I know, not too bad. But this isn’t a commercial project. I’m wanting to use it for a puttering little house bot. If my wife gets a bill for $40 because I decided to stream images to the API, well, it’ll be a dead bot. Anyway, I thought I’d still explore the service for poo-and-giggles.

To get an account visit

And sign-in with an existing Google account or create one.

2. Enter Billing Information

Now, here’s the scary part, you’ve must enter your billing information before getting going. Remember, you will be charged if you go over 1000 calls.

Again, if you exceed your 1,000 free calls you will be charged. (What? I said that already? Oh.)

3. Enable Cloud Vision API

After setting up billing information, we still need to enable the Cloud Vision API. This is a security feature; essentially, all Google APIs are disabled by default, so if someone accidentally gets access they don't unleash hell everywhere.

Now search for Vision and click the result. There should be a glaring Enable button. Press it.

The last thing we need to do is get the API key. It needs to be included in each API call for authentication.

Do not let anyone get your API key. And do not hardcode it in your code. Trust me, this will bite you. If this accidentally gets pushed onto the web, a web crawler will find it quickly and you will be paying bajillions of dollars.

Let this article scare you a bit.

Let’s go get your API Key. Find the Credentials section

You probably won't see any credentials listed, as you probably haven't created any yet.

Let’s create a new API Key.

I’d name the key something meaningful and limit it to only the Google Cloud API.

Go ahead and copy your API key, as we will need it in the next step.

4. Raspberry Pi Side Setup

The articles listed at the top of this one will help you set up the Raspberry Pi for this step. But if you are doing things differently, most of this should still work for you. However, when we get to the part about environment variables, that'll be different for other Linux flavors.

Start by SSH’ing into your Pi.

And update all packages

sudo pacman -Syu

We’re going to create an environment variable for the Google Cloud Vision API. This is to avoid hardcoding your API key into the code further down. That will work, but I highly recommend you stick with me and setup an environment variable manager to handle the API.

Switch to the root user by typing

su

Enter your password.

The next thing we do is add your Google Vision API key as an environment variable to the /etc/profile file; this should cause it to be initialized at boot.

Type, replacing YOUR_API_KEY with your actual API Key.

echo 'export GOOGLE_CLOUD_VISION_API_KEY=YOUR_API_KEY' >> /etc/profile

Now reboot the Pi so that takes effect.

sudo reboot
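
Or, if you'd rather not reboot, sourcing the profile in your current shell should pick it up too:

source /etc/profile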

Log back in. Let’s check to make sure it’s loading the API key.

echo $GOOGLE_CLOUD_VISION_API_KEY

If your API key is echoed back, you should be good to go.

5. Project Setup

Let’s create a project directory.

mkdir google-vis
cd google-vis

Now let’s initialize a new Node project.

npm init

Feel free to customize the package details if you like. If you’re lazy like me, hit enter until you are back to the command prompt.

Let’s add the needed Node libraries. It’s one. The axios library, which enables async web requests.

npm install axios

Also, let’s create a resource directory and download our lovely test image. Ah, miss Hepburn!

Make sure you are in the google-vis/resources project directory when downloading the image.

mkdir resources
cd resources
wget https://ladvien.com/images/hepburn.png

6. NodeJS Code

Create a file in the google-vis directory called app.js

nano app.js

Then paste in the code below and save the file by typing CTRL+O and exiting using CTRL+X.

// https://console.cloud.google.com/

const axios = require('axios');
const fs = require('fs');

const API_KEY = process.env.GOOGLE_CLOUD_VISION_API_KEY

if (!API_KEY) {
  console.log('No API key provided')
} 

function base64_encode(file) {
    // read the image file as binary data
    var bitmap = fs.readFileSync(file);
    // convert the binary data to a base64 encoded string
    return Buffer.from(bitmap).toString('base64');
}
// encode the test image we downloaded into ./resources earlier
var base64str = base64_encode('./resources/hepburn.png');

const apiCall = `https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`;


const reqObj = {
    requests:[
        {
          "image":{
            "content": base64str
          },
          "features":[
                {
                    "type":"LABEL_DETECTION",
                    "maxResults":5
                },
                {
                    "type":"FACE_DETECTION",
                    "maxResults":5            
                },
                {
                    "type": "IMAGE_PROPERTIES",
                    "maxResults":5
                }
            ]
        }
      ]
}

axios.post(apiCall, reqObj).then((response) => {
    console.log(response);
    console.log(JSON.stringify(response.data.responses, undefined, 4));
}).catch((e) => {
    console.log(e.response);
});

This code grabs the API key environment variable and creates a program constant from it.

const API_KEY = process.env.GOOGLE_CLOUD_VISION_API_KEY

This is how we avoid hardcoding the API key.

7. Run

Let’s run the program.

node app.js

If all went well, you should get output similar to this:

data: { responses: [ [Object] ] } }
[
    {
        "labelAnnotations": [
            {
                "mid": "/m/03q69",
                "description": "hair",
                "score": 0.9775374,
                "topicality": 0.9775374
            },
            {
                "mid": "/m/027n3_",
                "description": "eyebrow",
                "score": 0.90340185,
                "topicality": 0.90340185
            },
            {
                "mid": "/m/01ntw3",
                "description": "human hair color",
                "score": 0.8986981,
                "topicality": 0.8986981
            },
            {
                "mid": "/m/0ds4x",
                "description": "hairstyle",
                "score": 0.8985265,
                "topicality": 0.8985265
            },
            {
                "mid": "/m/01f43",
                "description": "beauty",
                "score": 0.87356544,
                "topicality": 0.87356544
            }
        ],
  ....
]

8. And so much more…

This article is short: a jump start. However, there is lots of potential here. For example, sending your own images using the Raspberry Pi Camera.
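
For instance, assuming you have the camera module attached and a capture tool such as raspistill available (it isn't part of a stock Arch ARM install, so treat this as a sketch), you could feed a fresh capture straight into the script:

raspistill -o resources/capture.jpg
node app.js

You'd just point base64_encode() at resources/capture.jpg instead of the test image.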

Please feel free to ask any questions regarding how to use the output.

There are other feature detection requests.

However, I’m going to end the article and move on to rolling my on vision detection systems. As soon as I figure out stochastic gradient descent.

1B1 Robot

Not too long ago there was a post on Hackaday about a little four-wheeled bot made with a Raspberry Pi and some eBay motor drivers.

Raspberry Pi Zero Drives Tiny RC Truck

I really liked the little chassis, ordered one, and was happy to find it was delivered with the motors already mounted. (As I become an aged hacker, it’s the little time savers which are genuinely appreciated.)

On buying the chassis, I'd already decided to use one of my Raspberry Pi Zero W's (rp0w) to control the bot. I really like Arch Linux on the rp0w. It's lightweight and the packages are well curated. Again, it's the little time savers. I liked the combination even more once I found a way to set up the rp0w headlessly, which meant I could go from SD card to SSH'ing into the little Linux board.

Coincidentally, I purchased several DRV8830 modules from eBay. This is a sad story – I’ve played with the DRV8830 chip a long time ago:

Because Sparkfun did a great job of documenting the IC and creating an Arduino library to go with it. I was disheartened to find Sparkfun had EOL'ed the boards.

Probably because buttholes like me kept buying them off eBay. I’ve got some mixed feelings here – one of them is guilt.

Anyway, I was surprised to find the mounting holes on the DRV8830s matched a set on the chassis. I decided to attempt using one module to drive two motors, thereby only needing two DRV8830 modules to drive the entire bot.

I’ve had some thermal paste lying about for years–it works nicely as an adhesive. Also, I was hoping to use the chassis to heatsink the motor drivers.

A bit of a tangent. At work, one of the skills useful to our team is being able to work with APIs. For a while I've wanted to learn NodeJS, since it seems to be the go-to framework for solid back-end business applications. It doesn't hurt that StackOverflow's Developer Survey for the last few years has shown JavaScript is a solid language to stay sharp on. Specifically, being able to work within the NodeJS framework makes one pretty darn marketable.

Ok, for these reasons I decided to build this bot using NodeJS. I’ve written a separate article on setting up NodeJS, working with i2c-bus, and porting the DRV8830 Sparkfun library to NodeJS.

  • Not yet written (sheesh, been busy. Judge much? :P)

It didn’t take time at all to get the little motor spinning using NodeJS largely due to Michael Hord’s (Sparkfun) MiniMoto library. (Again, some guilt here.)

I drove the motor shown using two Li-Ion batteries in series connected to a buck converter set to output ~5.0v. The motor spun nicely and pulled around 200mA. However, the real test would be connecting two geared motors per DRV8830.

'use strict';
var i2c = require('i2c-bus'), i2c1 = i2c.openSync(1);
var sleep = require('sleep');
var drv8830 = require('./drv8830');

const motorAddressOne = 0x61;
const motorAddressTwo = 0x67;

var motor1 = new drv8830(motorAddressOne, i2c1);
var motor2 = new drv8830(motorAddressTwo, i2c1);

motor1.drive(50);
motor2.drive(50)
sleep.msleep(3500);
motor1.drive(-50);
motor2.drive(50);
motor1.stop()
motor2.stop()

It was time to wire up the chassis motors and create a test of the system. The wire used was some eBay single core aluminum wire (the cheap stuff). Wiring was pretty straightforward.

However, I did make a little I2C bus board from perfboard and JST connectors, adding both ceramic and electrolytic decoupling capacitors for smoothing and to aid peak discharge.

Note the heaping amount of heatsink goop on the underside of the perfboard; this was a hacker's solution to galvanically isolating the perfboard from the steel chassis.

One-B-One Schematic

+--------------+                    +------------------+           +------------------+
|              |                    |                  |           |                  |
|              +--+LEAD1+----+OUT1+-+                  |VCC----+5V-+                  |
|              |                    |                  |           |                  |
| Motor 1      +--+LEAD2+----+OUT2+-+   DRV8830+A      +----GND----+  Buck Regulator  |
|              |                    |                  |           |                  |
|              |                    |                  |           |                  |
|              |                    |                  |           |                  |
+--------------+                    +-----+---+--------+           +--+--+------------+
                                          |   |                       |  |
                                      SDA1|   | SCL1               5V |  | GND
                                          |   |                       |  |
                                          |   |                       |  |
                                          |   |                       |  |
                                          |   |                       |  |
                                     +----+---+--------+              |  |
                                     |                 |              |  |
                                     |                 |              |  |
                        +----+VCC2+--+  ADUM1250ARZ    ++VCC1+--------+  |
                        |            |                 |                 |
                        |   ++GND2+--+                 ++GND1+-----------+
                        |   |        |                 |
                        |   |        +----+--+---------+
                        |   |             |  |
                        |   |         SDA1|  | SCL2
                        |   |             |  |
                        |   |             |  |
                        |   |             |  |
                  +-----+---+-------------+--+-------+

                            Raspberry Pi Zero W

The ADUM1250ARZ is a bi-directional galvanic isolator for digital communication up to 1 Mbps. It's the first chip I ever designed a PCB for, and it's still my favorite. Essentially, the ADUM1250 separates the rp0w from the noisy motors; more importantly, if I screw something up on the motor side, it won't kill my rp0w. The ADUM1250 is not necessary for most people, just me.

The last bit I had to figure out was the Raspberry Pi's power. I attempted to use a single Li-Ion battery and a boost regulator to power it, but the regulators I bought were DOA.

Then I remembered the load-sharing and boost converter circuit salvaged from a battery bank. The charge circuit was built for Li-Po chemistry, and the only Li-Po I had lying about was a 350mAh one. I wired it up and was surprised the whole thing worked, with the added benefit of being able to charge the rp0w battery without disconnecting it. Booyah!

The last bit I did was for the video. I pulled the npm package keypress and wrote this little program.

'use strict';
var i2c = require('i2c-bus'), i2c1 = i2c.openSync(1);
var sleep = require('sleep');
var drv8830 = require('./drv8830');
var keypress = require('keypress');

const motorAddressOne = 0x61;
const motorAddressTwo = 0x67;

var motor1 = new drv8830(motorAddressOne, i2c1);
var motor2 = new drv8830(motorAddressTwo, i2c1);

// var speed = 63;

var turnSpeed = 33;
var driverSideSpeed = 63;
var passangerSideSpeed = 63; 

// make `process.stdin` begin emitting "keypress" events 

keypress(process.stdin);
 
// listen for the "keypress" event 

process.stdin.on('keypress', function (ch, key) {  
  if (key && key.ctrl && key.name == 'c') {
    process.stdin.pause();
  }
  switch(key.name) {
        
    case 'w':
        motor1.drive(driverSideSpeed);
        motor2.drive(passangerSideSpeed);
        break;
    case 's':
        var motors = [motor1, motor2];
        setDriveWithAcceleration(motors, driverSideSpeed, 10);
        break;
    case 'd':
        motor1.drive(turnSpeed);
        motor2.drive(turnSpeed*-1);
        break;
    case 'a':
        motor1.drive(turnSpeed*-1);
        motor2.drive(turnSpeed);
        break;
    default:
        motor1.stop();
        motor2.stop();
  }

});
process.stdin.setRawMode(true);
process.stdin.resume();

var setDriveWithAcceleration = function(motors, desiredSpeed, accelTimeMilliSec) {
    for(var i = 0; i < desiredSpeed; i++){    
        motors[0].drive(i);
        motors[1].drive(i);
        sleep.msleep(accelTimeMilliSec);
    }
}

Then, I shot the following video and called it donesies.

Editing Raspberry Pi Code Remotely from Visual Studio Code

I’m spoiled. I love the pretty colors of modern text IDEs. My favorite among them being Visual Studio Code.

I know it’ll engender a lot of bad rep with the old-timers, but I prefer the one on the right.

However, when working on a headless (no monitor) Raspberry Pi, it felt like I was pretty much stuck with nano.

Until! I discovered Visual Studio Code’s remote extension.

This allowed me to edit my Raspberry Pi files from within Visual Studio Code. So, I get all the joys of writing code directly on my Raspberry Pi, but with all the bells-and-whistles of Visual Studio Code (VSC).

For the most part, setup is pretty straightforward. But the Pi side can get tricky, so I’m going to walk us through the process.

1. Get Visual Studio Code

Download the version of VSC for your PC. Note, you aren’t running this from the Raspberry Pi–instead, you’ll be running it from the PC and connecting it to the Raspberry Pi.

After it’s downloaded and installed open it up.








Once open, click here




Ok, now search for the extension called

Remote VSCode

And hit the Install button. Once it finishes hit the reload button.

The extension works by creating a server which listens for incoming calls from the Raspberry Pi. Once we finish setting up the Raspberry Pi, we will use a special command which sends a file from the Raspberry Pi to Visual Studio Code. However, when it's all done, it'll look pretty seamless.

Back to setup.

In Visual Studio Code, press F1 and type Preferences: Open Workspace Settings

Find the section labeled

remote.onStartup: false

We need to change it to true by clicking on the pencil next to its name. This sets the listening server to start every time you open Visual Studio Code.

Almost there. Now to set up the Raspberry Pi. We need to install a program on the Pi which will send a file of our choosing to Visual Studio Code to be edited. RMate was my choice.

Start by SSH’ing into your Raspberry Pi as root.

Run an update

pacman -Syu

Let’s install ruby and supporting packages.

pacman -S ruby ruby-docs ruby-rdoc
sed "s|unset appendpath|appendpath \'$(ruby -e 'print Gem.user_dir')/bin'\\nunset appendpath|g" /etc/profile >> /etc/profile
source /etc/profile

If everything installs, then we're set up correctly. If not, feel free to ask debugging questions in the comments.

Now we’ll install the needed Ruby gems.

gem install rmate
gem install rdoc

The above commands install Ruby, use the Ruby package manager to install rmate and rdoc, and add Ruby's and its gems' executables to the PATH environment variable. All of this is necessary to get rmate working on Arch Linux.
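
A quick check that rmate actually ended up on the PATH:

which rmate

If that prints a path under your gem directory, you're good.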

Ok, let’s test it. Stop SSH’ing into your Pi by typing exit until it brings you back to your PC’s prompt. Now we are going to SSH into the Pi while listening for incoming files to be displayed in Visual Studio Code.

Open Visual Studio Code and open the integrated terminal (if it’s not showing hit CTRL + `).

At the terminal type

ssh -R 52698:localhost:52698 alarm@192.168.1.x

Replace the x with your Pi’s ip address.

This should SSH into the Pi while listening for files.

At the pi command prompt, type

rmate test.js

This should open a new file called test.js in your Visual Studio Code.

Now you get all the goodness of the VSC IDE, such as syntax highlighting, linting, etc!

A few notes. File permissions still apply, so if you want to be able to save a file, the user you logged into on the Raspberry Pi (and ran rmate as) must have write permission on that file.

However, if you do have write permissions, then the “File Save” function in the VSC editor will update the Raspberry Pi file with your modifications. Booyah!

One last annoyance to address. Whenever you want to use VSC to edit your file you have to log into the Pi using

ssh -R 52698:localhost:52698 alarm@192.168.1.x

This annoyed me a bit. I could never remember all that. Instead, I created a small bash script to help.

On my PC (this works for Mac and Linux; Windows users, you're on your own) I created a file in my home user directory called

vs

And added the following to the file.

echo $1
ssh -R 52698:localhost:52698 "$1"

Essentially, this script takes your Pi's login information and logs in to your Pi with the VSC remote extension listening.

To get it to work you’ve got to make the file executable

chmod +x vs

Then log in to your Pi like this

./vs alarm@192.168.1.x

Hope you enjoy.

Oh, and for you web-devs, this also works for remote servers. Just replace the Pi with the server.

Porting DRV8830 I2C Motor Driver Code to NodeJS

Earlier in this article series I showed how to install NodeJS; it was pretty simple with an install script. However, I thought I'd better show how I actually worked with NodeJS to create my little 1B1 driver code.

Again, simple: I used others' hard work. Specifically, Michael Hord's (Sparkfun) MiniMoto library.

Really, all I did was tweak the code a little bit to fit JavaScript syntax.

The result

'use strict';
var i2c = require('i2c-bus');
var sleep = require('sleep');

// Commands

const FAULT_CMD         = 0x01;

// Fault constants

const CLEAR_FAULT       = 0x80;
const FAULT             = 0x01;
const ILIMIT            = 0x10;
const OTS               = 0x08;
const UVLO              = 0x04;
const OCP               = 0x02;

// Direction bits

const FORWARD           = 0b00000010;
const REVERSE           = 0b00000001;
const HI_Z              = 0b00000000;
const BRAKE             = 0b00000011;

module.exports = class Motor {

    /** 1. Add "inverse" motor option
     *  2. Add option to clear fault on each motor call.
     *  
     */

    constructor(address, i2cbus, options = undefined) {        
        this.address = address
        this.i2cbus = i2cbus
        this.options = options
    }

    getFault() {

        var fault = {
            message: '',
            code: 0
        }
    
        var faultCode;
        try {
            faultCode = this.i2cbus.readByteSync(this.address, FAULT_CMD);
        } catch (e) {
            console.log(`Read fault failed: ${e}`)
        }
        
        fault.code = faultCode;
    
        if (faultCode !== undefined) {
            console.log(faultCode);
            fault.message = 'Unknown fault.';
            switch (faultCode){
                case FAULT:
                    fault.message = 'Unknown fault.'
                    break;
                case ILIMIT:
                    fault.message = 'Extended current limit event'
                    break;
                case OTS:
                    fault.message = 'Over temperature.'
                    break;
                case UVLO:
                    fault.message = 'Undervoltage lockout.'
                    break;
                case OCP:
                    fault.message = 'Overcurrent lockout.'
                    break;
                default:
                    fault.message = 'Unknown fault.'
                    break;
            }
            return fault;
        } else {
            fault.message = 'No fault';
            return fault;
        }
    }
    
    clearFault() {
        var fault = this.getFault(this.address);
        if (fault.code) {
            try {
                var success = this.i2cbus.writeByteSync(this.address, FAULT_CMD, CLEAR_FAULT);
                if (success) { return true; }
            } catch (e) {
                console.log(`Failed to clear faults: ${e}`)
            }
        }
        return false;
    }
    
    drive(speed = 0, direction = undefined, checkFault = false) {
        // The speed should be 0-63.

        if (checkFault) { this.clearFault();}
        if (direction === undefined) {        
            direction = speed < 0;
            speed = Math.abs(speed);
            if (speed > 63) { speed = 63; }
            speed = speed << 2 ;
            if (direction) { speed |= FORWARD; }
            else           { speed |= REVERSE; }
        } else {
            speed = speed << 2 ;
            speed |= direction;
        }
        try {
            this.i2cbus.writeByteSync(this.address, 0x00, speed);
        } catch (e){
            console.log('Drive command failed.')
        }
    }
    
    brake() {
        try {
            this.drive(0, HI_Z);
        } catch (e) {
            console.log('Brake command failed.')
        }
    }
    
    
    stop() {
        try {
            this.drive(0, BRAKE);
        } catch (e) {
            console.log('Brake command failed.')
        }
    }
}

There’s a lot left to do, but it works.

Todo List:

  1. Have the constructor accept an options object
  2. Add read() to get the current speed at which a motor is set.
  3. Refactor option to clear faults on write to be determined during construction
  4. Add acceleration and deceleration functions.
  5. Create an async polling of fault codes.

But! For now it works.

Also, for those who are like, “You stole code, dewd! Not cool.” Mr. Hord's code has a beerware license. I sent this email to Sparkfun in regard to the license and how I might pay Sparkfun back for their work.

Hey Mr. Hord,

I’m in the process of porting your DRV8830 library to Node–I wanted to make sure I give appropriate credit.

https://github.com/Ladvien/drv8830

Also, was going to ship some beer to Sparkfun–in respect of the beerware license. Just let me know what kind.

Lastly, I wanted to make sure Sparkfun benefits. It looks like the DRV8830 TinyMoto board has been discontinued. Should I recommend people roll their own…or gasp get something off a slow ship from China? —Thomas aka, Ladvien

But I didn’t hear back. C’est la vie