to type or not to type…

that is the question.  Rather than a Shakespeare reference, I'm here referring to a term in software development which describes how a language deals with its variables.

Define: type

When you create a variable in a computer language, it’s usually something like this:

var someVarName = 1;

In a case like this, we might infer that someVarName stores a number (an integer).  We might say that someVarName's type is integer.  Using a pet-ownership metaphor, it's like purchasing a dog house first (“someVarName”) and then buying a dog to put into it (“1”).  You wouldn't buy a fish bowl to store a dog… although this seems to work out great if you own a cat.  JavaScript, for example, is like this picture: it seemingly doesn't care if you want to store a cat in a fish bowl.

cat-in-a-bowl

Two Schools of Thought

There are two camps out there:  those who like languages which enforce variable types and those who don't.

A statically-typed language usually involves a step in which your code is converted into something else (compiling) and any type-related issues must be fixed before a program can be created.

A dynamically-typed language is run “as is” and the code is evaluated at the moment of truth—determinations about the type of a variable are made at this time.  If there is a type-related issue, your end-user could be the first person to see the error.

Statically-Typed      Dynamically-Typed
----------------      -----------------
Java                  JavaScript
C++                   Python
C#                    PHP
C                     Objective-C
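
To make the distinction concrete, here's the cat-in-the-fish-bowl moment in JavaScript; a statically-typed compiler would refuse to build the equivalent reassignment:

// JavaScript (dynamically-typed): the fish bowl doesn't care what you put in it
var someVarName = 1;          // starts life holding a number
someVarName = "now a cat";    // ...and now holds a string, with no complaint

// A statically-typed language such as Java or C# would reject the equivalent
// reassignment at compile time, long before any end-user could see the error.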

The Pendulum Swings

Over the past three decades, the popularity of either approach has waxed and waned.  It’s safe to suggest for the moment that the less-strict languages are gaining rapidly in popularity over their stricter counterparts.

most-popular

We have the world of open source to thank for the popularity and speed of development we’re currently seeing in these dynamically-typed languages like JavaScript and Python.

Seeing the Future

Honestly, though, there are too many people in that strict-is-better camp and their influence is felt within software development companies.  If I were to guess at the future of JavaScript, I’d probably have to say that TypeScript and Flow will gain in popularity as larger development teams look to lower the number of bugs in their code.

I don’t know, though.  Maybe it’s time that we just relax and let the cat hang out in the fish bowl.


blinking the raspi’s built-in LED

I’ve just added a repository of some JavaScript code to take over and exercise the built-in activity LED on a Raspberry Pi Zero W (and presumably other models). It’s called gpiozero-toggle-led and it’s a pretty simple interface with installation instructions and some sample code. It works with the underlying js-gpiozero JavaScript port of the popular original Python code. This would be an excellent way of simply demonstrating GPIO without any additional wiring, components, breadboards, extra power supplies or electrical knowledge (like finding a 330-ohm resistor using its color bands).

zero-wireless

Note that the “zero” in the title of the repository and in js-gpiozero does not refer to the Raspberry Pi Zero but to the original gpiozero Python library.

This should remove some of the guesswork when attempting to use the relatively new library, since its documentation examples are, at the moment, taking a back seat to the code port itself from the more extensive Python offering.

This approach can easily be modified to drive external LEDs instead (soldered or otherwise attached to the header pin locations seen below).  Note that you'll use “BCM numbering” for APIs such as this one. For an external LED, you'd wire it from a selected pin through a resistor to one of the ground pins, minding the orientation of the LED's anode/cathode, of course.

raspberry-pi-pinout

If you’re trying to use this with a Raspberry Pi of a different model, you’ll likely want to adjust the JavaScript slightly as seen below.

/routes/index.js:

// Existing code, for a Raspberry Pi Zero
var ledActivity = new LED(47, false);
// For Raspberry Pi 3, for example
var ledActivity = new LED(47);

And that's it. Since the Raspberry Pi Zero's activity LED uses the opposite true/false polarity from the bigger models, it's necessary to configure this in the device constructor to make things work as expected. And since BCM pin 47 is the activity light on the board itself, this lets you control it.
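
To give an idea of what exercising the LED looks like, here's a minimal sketch. It assumes the js-gpiozero port mirrors the Python library's LED methods (on/off/toggle) and that the module imports under that name; see the gpiozero-toggle-led repository for the exact usage.

// Minimal sketch, assuming js-gpiozero mirrors the Python library's LED API;
// check the gpiozero-toggle-led repository for the real import and usage.
var LED = require('js-gpiozero').LED;   // module name assumed here

var ledActivity = new LED(47, false);   // BCM 47, active-low on the Pi Zero

setInterval(function () {
  ledActivity.toggle();                 // blink the activity light once per second
}, 1000);

// An external LED (wired through a resistor to a ground pin) would be driven
// the same way, e.g. new LED(17) for a hypothetical choice of BCM pin 17.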

o please, gentlemen, a little bluer…

Today’s inventiveness involves a new teaching method for music, a synesthetic approach to colorizing musical notes. The title’s quote comes from Franz Liszt, a 19th-century composer who was a synesthete—he saw music in full color.

Although western doctors probably think of synesthesia as a malady, I would suggest that it is a product of beneficial neuroplasticity. The brain has cross-wired itself across the senses to allow for better recognition and appreciation of something. There’s a long list of famous musicians and composers who wrote of this personal condition and in each case it helped them to succeed.

vexflow-syn

In order to promote this cross-wiring in young musical students, I’ve created a repository to colorize musical notes in client-side JavaScript. I’ve developed an organized method for this and have described the process there.

Compatibility

Given that the client-side JavaScript approach requires the newer HTML5 canvas features, this will work on newer browsers (and seems to be working in IE11 if you “allow blocked content”).

Musical Talent

I have always had a fondness and an early aptitude for music. In fact, I had such a brilliant aural memory and an ability to play anything I'd just heard that I used this as a crutch when confronted with the task of learning to read musical notation. I didn't actually have to read the notation in band since the sound of the music was in my head. So although I was a slow reader with respect to notation, nobody could actually tell.

My earliest formal training was for the saxophone, noting of course that you only play a single note at a time. Unfortunately, this led to my later difficulties in learning to play the piano in my thirties. Piano chords on a stave? To me, this just seemed like jumbles of notes piled on top of each other. I had no easy way of interpreting what I was seeing.

After many weeks of painstakingly trying to decipher these hieroglyphics, if you will, I began to have a small breakthrough. My brain started to recognize some patterns. Then, due to some unfortunate timing, I had to stop the training, move abruptly and sell the piano. It would be another decade before I bought another piano to re-learn piano notation.

Attacking the learning of chords-in-notation anew, I realized that colorizing the notes would be a benefit to me.  All C notes are red.  All E notes are yellow.  C-E-G are primary colors (C-maj).  The Eb in the middle of the C-min chord is more orange than the original yellow.  A synesthetic approach to musical notation is, I'd suggest, a wonderful adaptation of a centuries-old teaching methodology, at least in my own case.
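
In code, the mapping can be as simple as a lookup table. Here's an illustrative sketch in client-side JavaScript; apart from the colors called out above (C red, E yellow, G blue as primaries, Eb warmer than E), the values are placeholders of my own rather than the exact scheme the repository uses.

// Illustrative note-to-color map; only C, Eb, E and G follow the colors
// described above, and the rest are hypothetical fill-ins.
var noteColors = {
  'C':  '#ff0000',  // red
  'D':  '#ff6600',  // red-orange (guess)
  'Eb': '#ff9900',  // more orange than E
  'E':  '#ffff00',  // yellow
  'F':  '#66cc00',  // yellow-green (guess)
  'G':  '#0000ff',  // blue
  'A':  '#6600cc',  // violet (guess)
  'B':  '#cc0099'   // magenta (guess)
};

function colorForNote(name) {
  return noteColors[name] || '#000000';  // fall back to black if unmapped
}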

tommy can you hear me?

Okay, so this week's invention involves me being uncharacteristically cheap. If you usually read about the gadgets I buy, you know that I'm seriously into making things and that I use a variety of tools to get these projects done. I'm happy to buy something if it's worth the cost.

For these monthly talks I've been giving, I wanted a hands-free option for the Sennheiser FM transmitter, because when you're giving a talk and typing on the computer at the same time, both hands need to be free. And yet, the Sennheiser has a proprietary 1/8″ jack that makes it difficult to shop anywhere but their website for accessories.  And their cheapest headset is still hella expensive (~$180) and it's just a standard headset with a standard microphone and a proprietary plug.  <_<  So I decided to try to build an entire rig instead.

The Gear

Of course, I’m starting things with a Raspberry Pi 3 at the moment but will likely port this over to a Raspberry Pi Zero W when I get things working.

raspberry-pi-3

I just picked up a digital USB microphone from Radio Shack (since they're closing almost all the stores here in San Diego), so it was a mere $10, and it has great quality in a tiny package.

mic

At REI, I snagged an FM radio so that I could do the development and listen in on the transmitted signal.

midland

The Sennheiser at the venue looks like this.  At the last monthly talk I took a photo with my phone so that I could record the tuning of their setup.

sennheiser

You can't see the hand-held, corded microphone in this stock photo, but as things stand it's kind of a pain.

Progress

So far, things are looking pretty good. I’m able to record from the microphone using the alsa-utils arecord program. I’m able to convert the output WAV file into something suitable for re-transmission. And I’m able to broadcast the signal from a GPIO pin on the Raspberry on a selected FM frequency. I believe I can make a longer antenna that should work out.

What’s missing at the moment is a way to (correctly) daisy-chain each of the commands together so that things will continuously transmit, say, upon startup.

arecord -D plughw:1 -f S16_LE -r 48000 - | ./pifm - - | sudo ./rpitx -m RF -i - -f 87900

Something like that, anyway. And yet, it doesn't seem to work like this.  The various raw “-” hyphens seen throughout are supposed to represent STDIN/STDOUT for streaming output from one command to the next. Many tools honor this convention as expected, but apparently not all of them do here.

Anyway, things like this take a lot of hacking at the problem to get it solved. Perseverance usually wins a game like this.

Update

And of course the solution was a slight tweak to the earlier attempt.

arecord -D plughw:1 -f S16_LE -r 48000 /dev/stdout 2> /dev/null | ./pifm /dev/stdin /dev/stdout | sudo ./rpitx -m RF -i /dev/stdin -f 87900
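
And since this blog leans toward JavaScript anyway, the same chain could be wired up from Node.js with child_process pipes instead of a shell one-liner. This is only a sketch; it uses the same binaries, paths and arguments as the command above and assumes the script runs with enough privileges for rpitx.

// Sketch: chain arecord -> pifm -> rpitx using Node.js streams
// (same binaries, paths and arguments as the shell pipeline above).
var spawn = require('child_process').spawn;

var arecord = spawn('arecord', ['-D', 'plughw:1', '-f', 'S16_LE', '-r', '48000', '/dev/stdout']);
var pifm    = spawn('./pifm',  ['/dev/stdin', '/dev/stdout']);
var rpitx   = spawn('sudo',    ['./rpitx', '-m', 'RF', '-i', '/dev/stdin', '-f', '87900']);

arecord.stdout.pipe(pifm.stdin);   // raw audio into pifm
pifm.stdout.pipe(rpitx.stdin);     // modulated output into rpitx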

mini digital storage oscilloscope review

I just got in the decidedly cute DS203 Mini DSO (digital storage oscilloscope), weighing in at a mere 80 grams. We can reasonably guess from MiniDSO.com's website that English is a second language for them. From what I understand, this is an open-source project, so it will be fun to see what I can do with it.

SainSmart
K1, K2, K3, K4 & NAV A, NAV B across the top

Open Source

From what I'm reading in an online PDF, you can tether this to a PC and it appears as a USB drive, allowing you to make some modifications to the system itself. There appear to be examples for updating the splash screen logo and downloading/updating the application itself. Since the operating system is likely some sort of Linux, I might be able to hack apart an update to find out what's inside.

Precision

Looks like there are six adjustable potentiometers “under the hood” to allow you to calibrate it for accuracy. Most full-sized scopes have this feature, but usually with only about two such adjustments, to be honest.

Accessories

It was fully assembled in the box although the online PDF suggests that there was a time when the customer was asked to fully put it together. This one included two probes (1X, 10X) which is pretty generous given that they can be as much as $30 each. It includes a small hex wrench for opening the back (access to those potentiometers). And finally, there was a tri-fold card with the barest of instructions possible. Here’s an example of a third of the instructions:

Turn on the power, enter the main page of the oscilloscope. Place in the standard signal (e.g. square wave 1 KHz, Vpp = 5V), insert X1 probe’s MCX end to CH A or CH B, and the probe to “WAVE OUT”. Check if the measurement value and the standard value are equal, calibrate if different.

Okay, I know enough about oscilloscopes to know what they mean here. I’ll translate this into English-geek for you:

Connect the X1 probe to the CH A connection, power on the oscilloscope and wait for the main screen to appear. Remove the probe’s cover to reveal the bare tip, putting this into the center of the  “WAVE OUT” port. Press Key 4 until the side menu is selected then use NAV 2 to select V1 from the options. Use NAV 1 to adjust the horizontal line until it coincides with the top part of the square wave, noting the voltage—as now measured—at the bottom of the screen. If this voltage is different than the reference 5.0V from the signal generator, then calibrate the meter by following these steps…

etc

At least that is the standard routine on a full-sized oscilloscope. I guess what I’m trying to say here is that the online PDF and tri-card documentation are pretty laughable and aren’t enough for the average person to learn how to use it.

On-screen Menu

The menu system is pretty difficult so far. It's clear that NAV A and NAV B are used for selecting different values and moving from one place to another. K4 appears to move between the top set of menus and those down the right side of the screen.

Progress

After two full evenings playing with the interface, I'm beginning to understand some of the strange logic. Some of the hidden functionality is found when you press down on either the NAV A or NAV B sliders. It's lost on the average person that these left/right-style controls can actually be pressed as well. This opens up the missing features which had previously eluded me.

So now I can put an output wave on the screen (CH A probe inserted into WAVE OUT), adjust the signal to a square wave of 20 microseconds in width, add a single reference voltage V1, hide V2 (and Channels B/C/D), adjust the T1 and T2 reference lines to match up with the waveform's leading/trailing edges and then read the delta at the bottom of the screen. Given the complexity of this compared to the absence of a working manual, I'd call that rocket science.

The next step will be to attempt to calibrate it against a known-good 5V power supply which I've just adjusted and measured with a good-quality multimeter.

Thoughts

I'm torn between moving ahead now with my own work and writing a useful how-to manual for this oscilloscope. It's a shame that no one has written a good tutorial for it yet.

Update

And of course, I began working on rewriting a useful manual for this.

too much fun

My two packages arrived today at the post office, so I just hauled in all the loot from this earlier post in which I purchased some new toys.

Raspberry Pi Zero W

The photos from their website don't really convey how truly small this computer is. They've somehow managed to stack the RAM on top of the microprocessor to save space. As I've apparently ordered the wrong video adapter cable, I've got a trip over to Fry's Electronics this evening so that I can sort that one out. I need a female HDMI to DVI, in other words. Otherwise, I'm still pretty stoked. Since there's only one micro-USB, I think I'll temporarily need a small USB hub while I'm at it.

PiZero

NeoPixel Ring

This arrived as well, all four of the segments, but it was lost on me that I'd need to solder them together. Fortunately, I have a soldering iron here somewhere. :looks around: I'm certain of it.

COZIR CO2 Sensor with RH/Temp

And in the other relatively BIG package is the relatively small sensor package. No wonder they charged me $21.88 to ship this to me. Seriously, it weighs about an ounce.

And it looks like I’ll need a 2×5 jumper to attach this over to the Raspi, with a solder-able header for that, too.

Update 1

Alright, I'm back from Fry's with a handful of stuff and I'm back in business. The video adapter allows me to see what's coming out of the Raspberry Pi Zero W and the micro-USB hub allows me to hook up a keyboard and mouse to talk to it locally. A first install with the Raspbian Jessie Lite image resulted in a terminal-only configuration (I must have been in a hurry and didn't read the differences on their page), so a second install of Raspbian Jessie with Pixel was just what it wanted: a full desktop experience.  If I get some time this weekend I'll try to have it talk to either the sensor or the light ring.

Update 2

I just managed to solder together the NeoPixel ring. Due to the size of the electrical pads on the ends of these, I'd suggest that this falls into the category of advanced soldering and is not to be taken on by the average person.

NeoPixel
These are not my lovely hands.

Additionally, I'd say that this feels a bit fragile around the solder joints between each quarter-circle. I'm going to suggest that anyone who incorporates one of these into their project needs to seriously think about ways of making it more stable/reliable, since the solder joints between segments are tenuously small.  (Imagine three distinct electrical connections across the tiny width of this thing.)

What I also found is that there isn’t anywhere to clamp a hemostat for soldering these jumpers since the LEDs run all the way to the end where the connections should go.

I did add an inline resistor as Adafruit suggested, either to drop the input voltage slightly or perhaps to dampen start-up voltage spikes.

I managed to re-purpose a nice external 5V switching power supply that should drive all the LEDs nicely. It was left over from the supercomputer project when I swapped in a USB-based charger instead for that. Amazingly, Adafruit suggests that those 60 LEDs need a whopping 3.6A of power to drive them. I’m guessing that reality is more like 1A but I’ll play this safe. Per Adafruit’s suggestion I included a 1000 µF electrolytic capacitor across the output voltage to protect the NeoPixels.

VGD-60

So I’m prepped to do a final test of the NeoPixel ring for power and functionality on a standard Raspberry Pi 3 rig (since it sports an actual header). Once I’ve coded a test and verified that it works then I’ll take the soldering iron to the Raspberry Pi Zero W and wire it in with a quick-connect.

headerwire

I’ve now got the Raspberry Pi Zero W booting with just the power adapter. Note that you can rename its hostname, toggle on the VNC Server, adjust the default screen resolution to your liking and then—in the Finder program in OS X—open up a remote session to its Desktop with vnc://pi@hostname.local, for example. Or, toggle on the SSH Server and connect from a Terminal session with ssh pi@hostname.local.

Have I mentioned how awesome it is to have a fully-functioning computer for $10 (plus $6 for the micro SD)?

And now the power supply is completed and wired to the NeoPixel ring. Everything's set for 5V DC in at the moment, but I may try to adjust the input voltage down to 3.3V later for technical reasons. (The NeoPixels are designed for the Arduino, whose output data voltage is 5V, whereas the Raspberry Pi's is only 3.3V. Lowering the supply voltage makes a 3.3V data line look bigger, relatively speaking, than it otherwise would. There are other tricks, like adding a 3V-to-5V data inverter chip, but I'd like to avoid that one if possible.)

PowerSupply

Update 3

I've smoke-tested the power supply/ring combination and it's looking good. To make things easier for this step, I've now set up a surrogate Raspberry Pi 3 for testing, but since I only had a leftover 4GB microSD, I was forced to use the no-desktop “Lite” Jessie version of Raspbian. That's now ready, and I'll likely have some time this weekend to do a basic blink test.
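
For the blink test itself, the plan is to stay in Node.js. Here's a provisional sketch; it assumes the rpi-ws281x-native npm package and its init()/render()/reset() style of interface, so treat the exact calls as unverified until I've actually run it.

// Provisional blink test for the 60-LED ring, assuming the rpi-ws281x-native
// package and its init()/render()/reset() interface (unverified as yet).
var ws281x = require('rpi-ws281x-native');

var NUM_LEDS = 60;
var pixels = new Uint32Array(NUM_LEDS);
var lit = false;

ws281x.init(NUM_LEDS);

setInterval(function () {
  lit = !lit;
  for (var i = 0; i < NUM_LEDS; i++) {
    pixels[i] = lit ? 0x200000 : 0x000000;  // dim red, easy on the power supply
  }
  ws281x.render(pixels);
}, 500);

process.on('SIGINT', function () {
  ws281x.reset();   // turn everything off on Ctrl-C
  process.exit(0);
});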

why do you contribute to other’s repositories?

I’m interested to hear from other open-source coders out there. I’d like to know some of your motivations for contributing to another person’s or another team’s open-source repository. Call it a social studies experiment, if you will.

1st-Person Open-source

Here, I'm attempting to answer the question for everyone: “Why do you work on your own project in a public way and share your source code, knowing full well that someone may take your code or fork your project and become rich and famous as a result?”

  1. I believe that my project has some worth for others and sharing it could make the world a better place to live in
  2. Other people might help me with my project
  3. A well-rounded github set of repositories looks good on my résumé
  4. I’m not expecting to make money from doing this
  5. Since I don’t live in America, there aren’t as many opportunities so this is my way of getting some attention from potential companies there

Let me know if I’ve missed any motivations here.

2nd/3rd-Person Open-source

This one's a little trickier for me since I've been a lifetime coder. In the not-so-distant past I was well paid for working on software projects, and I've since watched coding salaries and the availability of programming gigs erode.

The next question, then, for everyone: “Why do you work on someone else's project in a public way, fixing their bugs and adding features, knowing full well that someone else may become rich and famous as a result?”

Case study – Github: Bloomberg reports that they recently brought in another $100M in venture capital based upon the Enterprise-level private repository revenue they’re currently earning. They’re currently valued at US$2B.

  1. I really like the other project’s code (let’s say, the Atom editor), believe in it and want it to be more awesome than it already is; since I use it myself, I’m getting something from the collaboration
  2. I want to work on a big project but I can’t otherwise get a job in a software development company so this is the next best thing; I’m getting the experience working in a software development team
  3. “Many hands make light work”; it feels good to help others; karma; “what comes around, goes around”…
  4. As a new programmer, I don’t have enough experience to start my own project yet
  5. Since I don’t live in America, there aren’t as many opportunities so this is my way of getting some attention from potential companies there; I might get hired by doing this

If I’ve missed any of your own motivations for coding on other people’s/team’s open-source projects, please add a comment here.

Some Thoughts on the Open-source Subject

What's strange is when you have an entire team of people spread all over the planet working together on a project started by one guy (let's say), time goes by, the project goes viral and then suddenly one day that “one guy” gets $250M in venture capital (as in the case of github). It's valued at US$2B at the moment, btw. That's about the same value as the New York Times.

I wonder if the investment companies realize that for the average open-source “company” this means that 1) they’re not necessarily incorporated, 2) they probably don’t have an office nor even a business checking account, 3) and anyone can fork the collection of code and start their own Atom-knockoff project if they wanted to.

And what happens to all the people whose free labor went into making github who they are today? Do they get a share of the money? No, they don’t. Do they get a job? Possibly, I suppose it all depends upon that original guy. But at this point, the power has greatly shifted from what it was before (more of a democratic society) to what it is now (more of a capitalistic corporation).

The siren call of open-source is a world which is free from capitalism. But what seems to happen is that these big projects are becoming exactly that, the thing these coders hated in the first place (or so it would seem). Open-source is supposed to be a culture. So why is it turning into nothing more than a first step to becoming a (funded) software development corporation in the end?

the fun never ends

Pretty stoked about my recent orders from the glorious interweb-of-stuff yesterday. Because, obviously, five Raspi’s are never enough for one coder.

Raspberry Pi Zero W

w00t. It’s a single-core version of, say, the Raspberry Pi 3 as if it were stolen, driven to a chop-shop in east Los Angeles and then people ripped off things like the RJ-45 port, the four full-sized USB ports, the header, half the RAM, etc. So it’s definitely stripped-down by comparison.  Looks like the HDMI connector and the two USBs are now their tinier counterparts. I don’t see an audio jack. It still has Bluetooth.

The ‘W’ model (up from the Zero) now includes embedded wi-fi, so this ought to be killer. Best of all, it only costs $10 compared to $35 for the Raspi3. Too bad it's twice the price of the original Zero, however. And at 2.6″ x 1.2″ it's smaller than the ones I've had to date.

Raspberry Pi Zero W

zero-wireless

What will I do with this? It may very well go into the aquarium project I’m working on.

NeoPixel Quarter-Ring 60 LEDs

I also ordered four quarter rings of NeoPixel(s) to build a lighting rig for the ecosystem-pi project.

NeoPixel

The intention is to apply realistic lighting to a closed-system aquarium throughout the day, adjusting the total lighting to compensate for the measured CO2 levels inside. Basically, the more light, the more plant growth, the more O2 produced and the more CO2 consumed in the process. There comes a point, though, where too much CO2 is bad for the shrimp and you don't want to stress them out, while too little CO2 stresses out the plants.
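
In code, that feedback loop needn't be anything fancy. Here's a rough sketch of the idea; the thresholds, the 0-100 brightness scale and the function name are illustrative placeholders, not values from the actual project.

// Rough sketch of the CO2/light feedback idea; the thresholds and the
// 0-100 brightness scale are illustrative placeholders only.
var CO2_HIGH_PPM = 1200;   // hypothetical "stressful for the shrimp" level
var CO2_LOW_PPM  = 400;    // hypothetical "too little for the plants" level

function adjustLighting(co2ppm, currentBrightness) {
  if (co2ppm > CO2_HIGH_PPM) {
    return Math.min(100, currentBrightness + 10);  // more light, more photosynthesis, less CO2
  }
  if (co2ppm < CO2_LOW_PPM) {
    return Math.max(0, currentBrightness - 10);    // back off so the plants aren't starved of CO2
  }
  return currentBrightness;                        // within the comfortable band, leave it alone
}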

Digital CO2 Sensor

I was able to find a CO2 sensor for the Arduino which can be tweaked for use in a Raspberry Pi project. This particular model also includes relative humidity and temperature for better logging.

COZIR Ambient carbon dioxide sensor with RH and temp

CO2_RHT-ambient_sensor_large

The Project

So far—since I don't have any of the sensors, LED lights and such yet—I'm stuck with the GUI design for the interface and with making sure that the shrimp are happy.

ecosystem-pi.png

Everything in the interface is mocked-up right now but it ought to be fun to get the Raspberry talking to the sensors and adjusting the lighting from programmatic control. A fair bit of research has been done so far in the areas of aquarium and plant health.

But the two shrimp seem happy and have completely cleaned the two plants of their week's worth of algae in three days' time.

how cool is electron?

I’ve been working the past couple of days with Electron, a Node.js cross-platform desktop app tool which uses JavaScript, HTML and CSS to create what look like native OS-style applications for Windows, OS X and Linux.

electron_atomelectron

Cool stuff, indeed. Out-of-the-box, it looks like you publish your Electron-based app like you would anything on github:

git clone https://github.com/Somebody/Repository.git
cd Repository
npm install
npm start

But there's also a way of downloading OS-specific images and then adding your own app into this subdirectory structure. The result is a stand-alone EXE and folder set which reasonably looks like a drop-in replacement for something you'd normally build locally using Microsoft Visual Studio, perhaps. In this version, though, you'd run Electron.exe, but there are instructions on their website for renaming your application, updating the icons, etc.
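
For anyone who hasn't peeked inside one of these apps, the main process is just a small Node.js script that opens a browser window onto your HTML/CSS/JavaScript. Something along these lines, where the window size and file name are arbitrary:

// main.js: a minimal Electron main process (window size and file name are arbitrary)
const electron = require('electron');
const app = electron.app;
const BrowserWindow = electron.BrowserWindow;

let mainWindow;

app.on('ready', function () {
  mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadURL('file://' + __dirname + '/index.html');  // your HTML/CSS/JS UI
});

app.on('window-all-closed', function () {
  app.quit();
});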

I've just used it today to build a basic music player. I wouldn't say that the layout is as responsive as a typical mobile app at moving content around, but I did tweak things so that it can squash down to a mini-player and it still looks great.

mplayer

I can thank KeithIG/museeks for the open-source code behind this. They have several OS-specific downloads available if you don’t want to build this yourself.

Pros

  • This allows you to build cross-platform desktop apps in much the same way that you’d use Adobe PhoneGap, say, to build for mobile apps.
  • You code in the familiar HTML/JavaScript/CSS trilogy of disciplines and it’s Node.js centric. It is also React.js-friendly, as I’m finding on this project.
  • So far, it seems to be well-behaved.
  • If you don’t want others to easily see your code, there’s a step where you can use asar to zip-up everything into a tidy package.
  • I didn’t have to digitally-sign anything like you might have to for a Windows 10 application or for OS X, say.
  • For people who have git and npm, the install is as easy as anything you’ve seen in the open-source space and a familiar workflow.

Cons

  • Currently, I don’t see any support for mobile platforms.
  • The complete folder set comes in at 216MB, which strikes me as a little big for what it's doing.  The music player app itself weighs in at 84MB of this, so the remainder is everything Electron is doing to present it.
  • You would need to set up three different build sites to maintain a specific download for your own app.  (It's not like PhoneGap in which you just submit the common code and Adobe builds it in the cloud.)
  • Given that you're not digitally signing your code, you might have to talk your users through the hurdles of “trusting” the content within their particular OS.
  • This might be so popular soon that none of us can really afford to just use Electron.exe by default to serve up our app; we’ll need to rename it before publishing, in other words.

Overall

I can see myself wanting to really learn this one deeply. It has a lot of potential for delivering a more native-app experience for users.

despicable me—themed supercomputer

I gave a talk on Tuesday to an eager group of 155 attendees at the monthly SanDiego.js meetup on the topic of “Supercomputing in JavaScript”. I had an opportunity to show the new Raspberry Pi 3 supercomputer which I’d built and took it through its paces.

I think they mostly loved the audio events for assembling the minions and sending them to bed (shutting off the remote nodes). There was just enough time to also show the obligatory “Hello, Minions!” demo program to exercise the Message Passing Interface. I received a wide variety of questions and compliments from the group. And of course afterward, everyone who owned a Raspberry Pi came over to discuss their own projects, which was cool.

Here’s the PowerPoint presentation from that talk, in case you’re interested.

e-mc2 repository with step-by-step instructions