to author or to fork?

I was interested in exercising Github’s REST API, so I knocked out a quick-and-dirty application to display some statistics.

Screen Shot 2019-01-02 at 1.53.46 PM

Honestly, a tool like this would be useful for a hiring manager in the software development space. Imagine being able to enter a list of ten accounts and see a side-by-side comparison of the coders like this.

Puffed-up Like a Cheeto

I’m surprised at the number of Github accounts which are mostly filled with dead forks of someone else’s code, with no contributions whatsoever. I don’t know if people are intentionally trying to pad their profiles or if they’re just unclear about what’s expected of them when they fork a repository.

A good collection of code should include mostly your own authored work. You’re hoping to give something back to the community. From the standpoint of your résumé, you’re hoping to show what kind of work you’re capable of doing.

So What’s Good?

For anyone who’s looking for a new position as a coder, I’d suggest that the Authored percentage value should be above 75%. In theory 100% might be best, and yet it would likely indicate that you don’t help out other coders with their repositories.
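
For what it’s worth, the underlying check is simple with the REST API. Here’s a minimal Node sketch of the idea (not the actual tool; it ignores pagination and rate limits, and the “Authored” label is my own shorthand) which counts how many of a user’s public repositories are forks:

// authored.js: rough sketch of the "Authored %" idea against Github's public REST API.
// Not the tool in the screenshot; pagination and rate limits are ignored for brevity.
const https = require('https');

function authoredPercentage(username, callback) {
  const options = {
    hostname: 'api.github.com',
    path: `/users/${username}/repos?per_page=100`,
    headers: { 'User-Agent': 'authored-percentage-demo' } // Github's API requires a User-Agent
  };

  https.get(options, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => {
      const repos = JSON.parse(body);                      // an array of repository objects
      const authored = repos.filter((repo) => !repo.fork); // keep only non-forked repositories
      callback(Math.round((authored.length / repos.length) * 100));
    });
  });
}

authoredPercentage('octocat', (pct) => console.log('Authored: ' + pct + '%'));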

Rule of Thumb

If you fork a repository, you should do one of two things:

  1. immediately start creating your own new software from it or…
  2. immediately start working to help the original author so as to create a pull request.

This behavior of fork-and-do-nothing just seems patently wrong to me. If you think about it, it’s almost the equivalent of copying someone else’s résumé content into your own.

intel edison

I recently purchased the (now discontinued) Intel Edison Breakout Board Kit from Fry’s Electronics. I’m guessing that I overpaid for this product offering by Intel since they’re only $23 at the time of this writing.

edison

I assume that there was a moment a few years ago in which Intel must have thought that they needed to enter into this whole IoT business and rule the space, given their history. I’m sure they were made confident in the sheer volume of Raspberry Pi and Arduino boards being shipped each year. “How hard could this be?”, I’m sure they asked themselves before venturing out into terra incognita.

Setting it up

Setting up the board was a bit different from my earlier attempts with either Raspberry Pi or Arduino boards. Intel decided that it would have you use Yocto, a toolkit for building pared-down custom Linux images, to generate the operating system. The result is a slim o/s with just enough breathing room for things to run.

Additionally, it uses not one but two micro-USB cables to your workstation for a fair bit of that setup, which seems unique. The first connection powers the Edison, creates a virtual network adapter and can be used to flash the code. The second is strictly serial and can also be used to communicate with the board. From the specifications, it includes two UARTs, for what it’s worth. Once set up, however, you can power it from a single connection.

At times, this duality can lead to trouble, as I found when attempting to connect to the Edison from within Intel’s System Studio software. It was unable to connect using the hostname alone, since that would try to use the wi-fi connection rather than the serial connection the software expected.

The Edison comes equipped with both Bluetooth and wi-fi. I would like to say that setting up the networking was easy; it wasn’t. I found the labyrinth of documentation to be daunting at times. The initially-suggested method of getting the chip running simply failed, so I had to do enough research to chase an alternative path using their Platform Flash Tool Lite. Having successfully connected the wi-fi to my network, I then attempted to see what was under the hood.

NodeJS

I was pleased to see that the configuration utility which runs by default on boot is a Node service. Once configured, though, the web interface provides little more information than you probably already knew by inspection. They call their Node implementation Intel XDK, which of course is now discontinued as well.
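
Assuming the image also includes the Node bindings for Intel’s mraa I/O library (my understanding is that the stock Yocto build ships them), a first program on the device can be as small as the sketch below; the pin number is only an example and depends on your wiring:

// blink.js: toggle a GPIO pin once per second via the mraa Node bindings.
// Assumes mraa is preinstalled on the Edison image; the pin number is only an example.
var mraa = require('mraa');

var pin = new mraa.Gpio(13);   // substitute whatever pin you've actually wired
pin.dir(mraa.DIR_OUT);         // configure the pin as an output

var state = 0;
setInterval(function () {
  state = state ? 0 : 1;
  pin.write(state);            // drive the pin high or low
}, 1000);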

System Studio

Intel provides an IDE for programming these devices; one needs to register in order to download the software. Having installed it, it’s easy to be impressed by how complicated the interface looks. It’s a lot like Microsoft Visual Studio with its many panels and such.

Unfortunately, none of this works. We’re just talking about the “Hello World” example here, and it simply doesn’t run on the Edison. A single shell script called device-detection.sh does not appear to include support for this device and, further, throws a fatal syntax error in the Yocto image’s own bash.

This means that any code compiled for the Edison uses the wrong target and so won’t run. Searching their user community forum didn’t turn up anything useful, so I’ve decided to abandon System Studio for the moment.

Arduino software

It looks like another option is to use the Arduino IDE software to push code to the Edison, assuming that we’re talking about C++ or similar compiled code. I haven’t tried this yet but I’m not sure if I really want to leave the relative comfort of JavaScript for C++ for this project anyway.

GPIO pins

Like a Raspberry Pi or an Arduino board, the Edison has GPIO pins. They’re just available on the back of the breakout board in this case. There is also a space for adding a barrel connector for power, should you want.

edison_rear

Overall impression

At a discounted price of $23, this falls in the middle between the Raspberry Pi Zero W ($5 plus $6 for a microSD) and the Raspberry Pi 3B ($35 plus $6 for a microSD) price points. It can host a diminished Linux stack, serve up Node applications and appears to have two full UARTs at your disposal, unlike the Raspberry Pi, which only has one full UART.

It’s probably okay for a few IoT projects but I doubt if I’d try to spin up a grand solution given its discontinued status. It was a good exercise in getting my feet wet with my first Yocto configuration at least.

I find myself disappointed with Intel’s inability to succeed in this product space. I can only guess at how they failed; I have to assume that the right people and skills were not included in the teams which contributed to this.

tanks a lot

I decided to build a very cool-looking robotic tank kit which is made by OSEPP. They have a variety of grown-up toys like this in the geekspace.

I guess I’ve been inspired lately by some of the local meetups which involve races with autonomously-driven cars.

To build this, I’ve found that a surprising amount of hardware goes into the project, as well as several programming languages all at once. I’ve had to bounce back and forth between Python and C as I interface the Raspberry Pi Zero W computer with the Arduino Mega 2560 R3 Plus board. This Arduino doesn’t come with Bluetooth, wi-fi or even an Ethernet jack, so I opted to add in the Pi since it’s inexpensive and comes with a full operating system. The Pi also carries a webcam, which initially makes the remote-control features easy. Later, that same camera will be used to generate images to be processed for autonomous driving.

DSC_0072

Repository

Update:

I decided to design some plastic parts for the tank. It’s now looking awesome, has some quick-release pins and I’ve purchased a 12-battery AA charger and batteries for the project since it seems to be hungry for power.

DSC_0073

DSC_0074

Screen Shot 2018-09-12 at 2.00.44 PM

The first three attempts at managing the tracks for steering didn’t seem accurate enough for my own driving expectations, at least. I finally had to resort to trigonometry in the last set of calculations; this appears to make for a more natural steering interface.
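
For the curious, the idea behind that last set of calculations is roughly the following (a simplified sketch of the approach, not the code from the repository): convert the joystick position into an angle and a magnitude, then derive each track’s throttle from those.

// Sketch of trigonometry-based differential steering (simplified; not the repository's code).
// x and y are joystick values in the range -1..1; the result is a throttle for each track.
function steer(x, y) {
  var magnitude = Math.min(1, Math.sqrt(x * x + y * y)); // how far the stick is pushed
  var angle = Math.atan2(y, x);                          // which direction it's pushed

  // Classic mixing: each track gets a phase-shifted share of the overall throttle
  var left  = magnitude * Math.sin(angle + Math.PI / 4);
  var right = magnitude * Math.sin(angle - Math.PI / 4);

  return { left: left, right: right };
}

console.log(steer(0, 1));  // straight ahead: both tracks driven forward equally
console.log(steer(1, 0));  // hard right: the tracks turn in opposite directions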

It looks like the first two phases of the project are complete and I’m now well into the third (autonomous) phase.

Autonomous (Self-Driving) Mode

Next up is the part where a service takes snapshots from the camera and then uses them to make steering decisions. The strategy here is to use image processing to find the road, so to speak: our position relative to the path ahead as well as any competitors also on the track.

The first interesting piece of the processing involves some linear algebra and a variety of matrices which perform distinct functions, if you will. You basically multiply a small kernel matrix against each 3×3 neighborhood of pixels, summing the products to replace the center pixel’s value. The first and most useful kernel is named findEdgesKernel and looks like this:

-1 -1 -1
-1  8 -1
-1 -1 -1

This produces a new, simpler image which should highlight only the track (masking tape) for the path ahead. This part is working quite well, so the next step is to process the resulting path image to determine how the tank should steer, both now and in the near future.
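
In code, applying that kernel is just a nested loop over the pixels. Here’s a stripped-down sketch of the operation (grayscale values only, border pixels skipped; the project’s actual implementation differs):

// Apply a 3x3 convolution kernel to a grayscale image (a 2D array of 0-255 values).
var findEdgesKernel = [
  [-1, -1, -1],
  [-1,  8, -1],
  [-1, -1, -1]
];

function convolve(image, kernel) {
  var height = image.length;
  var width = image[0].length;
  var output = image.map(function (row) { return row.slice(); }); // copy; borders stay as-is

  for (var y = 1; y < height - 1; y++) {
    for (var x = 1; x < width - 1; x++) {
      var sum = 0;
      for (var ky = -1; ky <= 1; ky++) {
        for (var kx = -1; kx <= 1; kx++) {
          sum += image[y + ky][x + kx] * kernel[ky + 1][kx + 1];
        }
      }
      output[y][x] = Math.max(0, Math.min(255, sum)); // replace the center pixel, clamped
    }
  }
  return output;
}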

Screen Shot 2018-09-12 at 2.03.10 PM

the braille project

Yesterday, I designed a Node-based program to generate a 3D mesh file programmatically from the input text to create a braille message.

Screen Shot 2018-07-03 at 5.22.55 PM

The concept is easy enough to grasp. Braille is a simple combination of raised dots; if we know that combination, then it should be easy enough to design a 3D CAD object which uses tiny spheres to render each dot.

But I didn’t want to laboriously design this in Autodesk Fusion 360 and I’m sure few people would. Everything has to be precisely placed and that’s just too much manual work. Even if you did, it’s not very easy to maintain. If you did catch an omission, just think of all the work you’d have to do to move things around! I’m relatively certain that this is currently how people create braille-based printouts as seen on an ATM machine, for example.

3d-braille

So yesterday, I designed and wrote a program for doing this. Generating the STL file was painless and took less than a second. Printing it took five hours, so I got to see the finished part this morning.
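
The core of the program is little more than a lookup table and some arithmetic for the dot positions. The sketch below (not the published code; only a few letters are mapped and the spacing values are illustrative) turns a string into a list of (x, y) sphere centers that a mesh generator can consume:

// Sketch: turn text into braille dot coordinates (sphere centers) for a mesh generator.
// Dots in a cell are numbered 1-3 down the left column and 4-6 down the right.
var CELLS = {
  a: [1], b: [1, 2], c: [1, 4], d: [1, 4, 5], e: [1, 5]   // ...and so on for the full alphabet
};

var DOT_SPACING = 2.5;   // mm between dots within a cell (illustrative value)
var CELL_SPACING = 6.0;  // mm between cells (illustrative value)

function dotsForText(text) {
  var centers = [];
  text.toLowerCase().split('').forEach(function (ch, index) {
    (CELLS[ch] || []).forEach(function (dot) {
      var column = dot <= 3 ? 0 : 1;   // left or right column of the cell
      var row = (dot - 1) % 3;         // top, middle or bottom
      centers.push({
        x: index * CELL_SPACING + column * DOT_SPACING,
        y: -row * DOT_SPACING
      });
    });
  });
  return centers;
}

console.log(dotsForText('cab'));  // each entry becomes a small sphere in the STL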

IMG_0038

logistics for the black pearl lcd theme

I decided to add more to the earlier Black Pearl Conky theme for my 3D printer’s TFT screen. It turned out to be a lot easier to do since I’d just finished a new module for OctoPrint.

octo-client:  A node-based module for directly talking to OctoPrint to gather raw information.

octo-conky:  A Conky script for presenting that information in a pleasing way.

The new information is there after the “Black Pearl v1.0.1” line where it pulls the version and temperature from the printer.

IMG_0037

j.a.r.v.i.s. realized

If you remember from my earlier post, I wanted to build the cool AI interface from the Iron Man movie series: J.A.R.V.I.S., as voiced by Paul Bettany.

jarvis

Well, I’ve done it. I wrote up several intents in an Amazon Alexa skill, created an AWS Lambda function as the endpoint and created a proxy in Node (served up by a Raspberry Pi Zero W single-board computer) to forward inbound Internet traffic. I’m now able to ask an Amazon Echo Dot how my printer is doing at home.
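
The Lambda side is mostly glue. Something along these lines handles the printer-status request by asking OctoPrint (through the proxy) for its job state; this is a trimmed-down sketch rather than the deployed function, and the host name and environment variable are placeholders:

// Trimmed-down sketch of the Lambda endpoint (not the deployed function).
// The real handler would branch on the incoming intent name; this only covers the status case.
const https = require('https');

exports.handler = (event, context, callback) => {
  const options = {
    hostname: 'proxy.example.com',                          // placeholder for the Pi-hosted proxy
    path: '/api/job',                                       // OctoPrint's job-status endpoint
    headers: { 'X-Api-Key': process.env.OCTOPRINT_API_KEY } // placeholder for the API key
  };

  https.get(options, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => {
      const job = JSON.parse(body);
      const text = 'The printer is ' + job.state + ' and the selected file is ' + job.job.file.name + '.';
      callback(null, {
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: text },
          shouldEndSession: true
        }
      });
    });
  });
};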

EchoDot

Remotely Control a Printer

For example, I can say:

Computer, ask Jarvis for my printer’s status.

…to which she will reply:

charming-pascal is ready and operational.

Now remember, I’m two miles away from home while I’m doing this and all of this still works.  I could ask:

Computer, ask Jarvis which file is selected.

…and she’ll say:

RC_microSD-clip.gcode is currently selected.

This is useful to know when I later code this up to remotely print a job as well. I can also ask:

Computer, ask Jarvis for the job status.

…and the reply might be:

charming-pascal is finished printing RC_microSD-clip.gcode

In the collection of skill intents, I now have the following:

  • Stop the print job
  • Start the print job
  • Pause the print job
  • Resume the print job
  • Ask for the print job status
  • Ask for the selected print job file
  • Ask for help
  • Open the Jarvis app

And I’ll need other intents to select a file to print, preheat the extruder and possibly other things yet unimagined.

I’ll definitely want to remotely see the output of the internal webcam inside the printer to make sure that it’s happy; sometimes print jobs go awry for a variety of reasons.

Remote Power Control

In addition, I also purchased a TP-Link Smart Plug to control power to the printer. I now have an Alexa skill to turn the printer on and off remotely.

tp-link

Computer, turn on my 3D printer.

add comments to a gcode file

I’ve just written a new command-line (CLI) tool, this time in NodeJS/JavaScript, but as usual it’s open-source. The program creates a new version of your 3D printer’s GCODE file, adding comments along the way which describe what each command does.

repository

I would suggest installing it somewhere in your path; then you should be able to invoke it easily from the working directory where your GCODE file(s) live:

 

$ gcode-comments file.gcode

;FLAVOR:RepRap
;TIME:11265
;Generated with Cura_SteamEngine 2.3.1
M104 S205            ; Set extruder temperature
M109 S205            ; Set extruder temperature and wait (blocking)
;LAYER_COUNT:28
;LAYER:0
M107                 ; Turn off fan
M205 X10             ; Adjust jerk speed
G1 F2400 E-1         ; Move and/or extrude to the indicated point
...

Input:  file.gcode
Output: file_commented.gcode
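
Under the hood, a tool like this is mostly a lookup table keyed on the command at the start of each line. Here’s the general shape of the idea as a sketch, not the published implementation:

// Sketch of the annotation pass (the published tool is more thorough than this).
var descriptions = {
  M104: 'Set extruder temperature',
  M109: 'Set extruder temperature and wait (blocking)',
  M107: 'Turn off fan',
  M205: 'Adjust jerk speed',
  G1:   'Move and/or extrude to the indicated point'
};

function annotate(line) {
  var command = line.trim().split(' ')[0];       // e.g. "M104" from "M104 S205"
  var note = descriptions[command];
  if (!note || line.trim().startsWith(';')) {
    return line;                                 // leave comments and unknown commands alone
  }
  return line.padEnd(20) + ' ; ' + note;         // pad the line, then append the description
}

console.log(annotate('M104 S205'));  // prints the padded line with "; Set extruder temperature"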

the matrix linode’d

Today’s review is about a pretty decent hosting company called Linode. Here’s the three-day timeline from idea to implementation:

  1. Thursday: decided to create a new info website about 3D printing, bought the domain name on GoDaddy and waited for the changes to take effect at midnight
  2. Thursday: created a Github repository to store the source for the website since I’m open-source like that
  3. Thursday: created an account on Linode, purchasing a “linode” for that
  4. Thursday: designed/created the initial local/development website layout/framework, collected images and content
  5. Friday: created the basic linode (virtual machine) for the website in production, provisioned a virtual drive, deployed Ubuntu 16.04 LTS onto it, booted and remoted into it, ran updates, installed the framework, added the website, and set up security and the firewall
  6. Friday: adjusted the DNS at GoDaddy to point to the new server, added more content
  7. Saturday: launched the website on Linode with the initial version
  8. Saturday: tweaked the settings to make the Node.js-based website start on bootup
  9. Saturday: added more content to the website

site

Not bad for an open-sourced 10-page (44 files) responsive website, if I do say so myself.

Please note that when I say “hosting company”, I really mean a “virtual server provider” so this is more like Amazon EC2 as a service offering. I didn’t just rent website space (like on Wix.com or WordPress.com), I rented an entire virtual server, if you will.

Comparison of virtual server versus website space

There are some advantages/disadvantages of renting a virtual server over just some website slot on a server somewhere:

Pros:

  • In theory, you could run several websites from a virtual server
  • You can run services in the background (like Node.js) and manage them
  • You can run additional processes on the same server, like helper routines which do something other than serving up pages
  • You’re not limited to the set of templates that are available from Wix.com, for example
  • Your website runs separately from other websites
  • You get an IP address which is only used for your website

Cons:

  • You have to set up security yourself since you’re responsible for the entire server
  • The learning curve is steeper
  • You have to know I.T. things like setting up servers and installing software

Framework/software

Here’s a list of what I used for this website:

  • Node.js: Probably the most famous event-driven JavaScript runtime engine out there
  • Express: A minimalistic Node.js framework for separating code from content on a website
  • Bootstrap.css: A responsive stylesheet and component library for styling a website
  • PM2: A handy process manager for Node-based applications on a server.  After pulling new code, I might run the command pm2 restart AppName to restart the service
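
For context, the skeleton of a site like this is tiny. A minimal sketch of the Express side might look like the following (the file names and port are examples, not the site’s actual layout):

// app.js: minimal Express skeleton for a static, Bootstrap-styled site (example only).
const express = require('express');
const path = require('path');

const app = express();

// Serve the Bootstrap CSS, images and other static assets from ./public
app.use(express.static(path.join(__dirname, 'public')));

// A simple route for the landing page
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

const port = process.env.PORT || 3000;
app.listen(port, () => console.log('Listening on port ' + port));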

Documentation

Kudos to Linode for providing a detailed Getting Started guide along with several tutorial videos on the subject.

And further, a note of thanks to PM2, which seems to satisfy the requirements of bringing up and managing a Node.js application as a service in a production environment, and for its ample documentation.

Suitability

Is Linode well-suited for most website designers/developers? Probably not. On an I.T. complexity scale from 1 to 10 potatoes, I’d say they’re about seven potatoes. In this case, you’d have to be comfortable doing the following:

  • Using a web-based console to allocate and bring up/down a virtual server
  • Using ssh to remote into your virtual server
  • Navigating within a command line interface on a Linux computer or similar
  • Using ssh-keygen to generate a keypair
  • Using apt-get to update things
  • Editing files using nano
  • Managing services, reading log files
  • Remotely rebooting your virtual server
  • Setting up a firewall, testing and managing same
  • Applying code using git
  • Testing a website to verify that there are no 404 (file not found) type of errors, for example
  • And obviously, creating/designing a website in the first place and using a repository like Github for storing those files

That said, it was a perfect fit for me since I can do those things. In fact, the Linode-related part of this took no more than two hours, even though this was the first time I’d used their interface. My next one should go much faster.

Observations

I will say that I’m impressed. Unlike Amazon AWS, Microsoft and Google, the people at Linode haven’t created an interface that’s overly complicated. It seems to work simply and to do the things you need to do, and those are: 1) buy a virtual server, 2) deploy something onto it, 3) turn it on and 4) remote into it. I don’t think the “big three” have figured this out yet; their interfaces and assumed workflows require too much research, in my humble opinion.

Additionally, the PM2 software does a great job of working with the git-based code distribution model, allowing you to restart the Node.js app when required and to start it automatically on each reboot. There’s an easy-to-remember command interface like pm2 show AppName which tells you what you usually want to know.
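
Getting that start-on-boot behavior is only a few commands, more or less; a sketch of the setup, assuming an entry point of app.js and an application named AppName (both placeholders here):

pm2 start app.js --name AppName   # run the app under PM2
pm2 save                          # remember the current process list
pm2 startup                       # prints the command that registers PM2 as a boot-time service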

At a cost of $5/month, it compares favorably to most of the well-known hosting providers out there. The basic linode will likely satisfy the requirements of the average Node.js application up to a reasonable level of simultaneous users, I’d suggest.

price

blinking the raspi’s built-in LED

I’ve just added a repository of some JavaScript code to take over and exercise the built-in activity LED on a Raspberry Pi Zero W (and presumably other models). It’s called gpiozero-toggle-led and it’s a pretty simple interface with installation instructions and some sample code. It works with the underlying js-gpiozero JavaScript port of the popular original Python code. This would be an excellent way of simply demonstrating GPIO without any additional wiring, components, breadboards, extra power supplies or electrical knowledge (like finding a 330-ohm resistor using its color bands).

zero-wireless

Note that the “zero” in the title of the repository and in js-gpiozero does not refer to the Raspberry Pi Zero but to the original gpiozero Python library.

This should remove some of the guesswork when attempting to use the relatively new library, since its documentation examples at the moment are taking a back seat to the code port from the more-extensive Python offering.

This approach can easily be modified to instead exercise external LEDs (soldered or otherwise attached to the header pin locations seen below). Note that you’ll use “BCM numbering” for APIs such as this one. For an external LED, you would need to connect it in line with a resistor from a selected pin to one of the ground pins, with the LED’s anode/cathode oriented correctly, of course.

raspberry-pi-pinout

If you’re trying to use this with a Raspberry Pi of a different model, you’ll likely want to adjust the JavaScript slightly as seen below.

/routes/index.js:

// The LED class comes from js-gpiozero (an assumption here: it's exported from the package root)
var LED = require('js-gpiozero').LED;
// Existing code, for a Raspberry Pi Zero: the second argument flags its inverted activity LED
var ledActivity = new LED(47, false);
// For a Raspberry Pi 3, for example, omit that argument:
// var ledActivity = new LED(47);

And that’s it. Since the Raspberry Pi Zero’s activity LED is wired with the opposite polarity from the bigger models, it’s necessary to configure this in the device constructor to make things work as expected. BCM pin 47 is the activity light on the board itself, so this lets you control it.
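
For a quick sanity check, something like the following should make the activity light blink, assuming the port mirrors the Python library’s on() and off() methods (an assumption on my part; check the repository’s sample code for the exact call names):

// Hypothetical blink loop for the activity LED, reusing the ledActivity object from above.
// Assumes js-gpiozero mirrors the Python gpiozero on()/off() API; verify against the samples.
var isOn = false;

setInterval(function () {
  isOn = !isOn;
  if (isOn) {
    ledActivity.on();   // the constructor already accounts for the Zero's inverted wiring
  } else {
    ledActivity.off();
  }
}, 1000);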

how cool is electron?

I’ve been working the past couple of days with Electron, a Node.js cross-platform desktop app tool which uses JavaScript, HTML and CSS to create what look like native OS-style applications for Windows, OS X and Linux.
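
To give a sense of scale, a bare-bones Electron app is just a package.json plus a main script along these lines (a generic sketch against a recent Electron release, nothing specific to the project below):

// main.js: a minimal Electron entry point (package.json's "main" field points here).
const { app, BrowserWindow } = require('electron');

function createWindow() {
  // An ordinary OS-level window whose contents are plain HTML/CSS/JavaScript
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile('index.html');   // any local page; a real app would load its UI here
}

app.on('ready', createWindow);

// Quit when every window is closed (except on OS X, where apps usually stay active)
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') app.quit();
});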

electron_atomelectron

Cool stuff, indeed. Out-of-the-box, it looks like you publish your Electron-based app like you would anything on github:

git clone https://github.com/Somebody/Repository.git
cd Repository
npm install
npm start

But there’s also a way of downloading OS-specific images and then adding your own app into that subdirectory structure. The result is a stand-alone EXE and folder set which looks reasonably like a drop-in replacement for something you’d normally build locally using Microsoft Visual Studio, perhaps. In this version, though, you’d run Electron.exe, but there are instructions on their website for renaming your application, updating the icons, etc.

I’ve just used it today to build a basic music player. I wouldn’t say that the layout is as responsive as a typical mobile app’s ability to move content, but I did tweak things so that it can squash down to a mini-player and it still looks great.

mplayer

I can thank KeithIG/museeks for the open-source code behind this. They have several OS-specific downloads available if you don’t want to build this yourself.

Pros

  • This allows you to build cross-platform desktop apps in much the same way that you’d use Adobe PhoneGap, say, to build for mobile apps.
  • You code in the familiar HTML/JavaScript/CSS trilogy of disciplines and it’s Node.js centric. It is also React.js-friendly, as I’m finding on this project.
  • So far, it seems to be well-behaved.
  • If you don’t want others to easily see your code, there’s a step where you can use asar to zip-up everything into a tidy package.
  • I didn’t have to digitally-sign anything like you might have to for a Windows 10 application or for OS X, say.
  • For people who have git and npm, the install is as easy as anything you’ve seen in the open-source space and a familiar workflow.

Cons

  • Currently, I don’t see any support for mobile platforms.
  • The complete folder set comes in at 216MB, which strikes me as a little big for what it’s doing. The music player app itself weighs in at 84MB of this, so the remainder is everything that Electron is doing to present all of this.
  • You would need to set up three different build sites to maintain a specific download for your own app. (It’s not like PhoneGap in which you just submit the common code and Adobe builds it in the cloud.)
  • Given that you’re not digitally signing your code, you might have to talk your users through the hurdles of “trusting” the content within their particular OS.
  • This might be so popular soon that none of us can really afford to just use Electron.exe by default to serve up our app; we’ll need to rename it before publishing, in other words.

Overall

I can see myself wanting to really learn this one deeply. It has a lot of potential for delivering a more native-app experience for users.