talking at the speed of lightning

I give so-called “lightning talks” at San Diego JS, a local group on Meetup.com that meets four times per month. Each talk lasts only five minutes, so there’s time for several speakers within a single event.

The venue is typically packed; there were about 120 attendees this month alone.

You can communicate a lot in a mere five minutes, but it’s challenging to distill everything you need to say into that timeframe. There’s really no room for storytelling; you just deliver the straight facts and details as you race through your slides and screenshots. At best, you can hope that someone asks a relevant question that lets you expand on a detail you’d hoped to include.

Challenges

Many of my projects involve more than one computer. Unfortunately, the security settings on most wi-fi routers at venues like this prevent one client from connecting to another, so the router would actively keep your demo from working. I’ve learned to bring along my own networking gear, which is a hassle, and especially so with IoT projects.

Another challenge is power. Each speaker needs to set up prior to the event, so they all bring their brick-style power adapters and want to plug in, and the venue usually forgets to accommodate them all.

A recurring problem is the screen-resolution compromise you have to put up with. You format all your screens for one resolution while creating your content, only to find that you’re presenting at a lower resolution, which threatens to clip off content or shrink the fonts until they can’t be read from the back.

Regardless, it’s a rewarding experience and I hope to give more talks in the months to come. I would encourage others to do the same. It’s a great opportunity to give back to the community of like-minded coders.


hacking agar.io, part 5

I guess anyone who’s been following along will also want a chance to play Agar.io without ads. Here are the step-by-step instructions.

Note: Throughout, I’ll use 1.2.3.4 as the IP address of the DNS server you’ll be creating. Assume that every time you see this, you’ll be substituting your own server’s private IP address. Any other IP address you see should be typed in exactly as I’ve shown.

I’ll be including instructions for two different DNS servers. Choose the one that makes more sense for you based upon your experience.

Node.js DNS server version

Since I like JavaScript, here’s a Node.js implementation, which could be augmented with a nice HTML administrative interface if you’d like. I haven’t gotten quite that far yet, but you can see what it takes to host a DNS server and a webserver in a single application.

  1. I assume that you already have Node.js installed, as well as npm and the express-generator. If not, you’ll need to install each first.
  2. Open a terminal
  3. Change to your home directory and, optionally, into a subfolder like ~/Sites as I did, creating one if necessary with: mkdir ~/Sites
  4. Run the express command to generate a new project:  express one-trick-pony
  5. If that ran correctly, change into the newly-created folder:  cd one-trick-pony
  6. Run the npm command to install the dependencies:  npm install
  7. Determine the IP address of your server and save this information for later:  ifconfig en1 | grep inet
  8. Run the npm command to install dnsd into your project (note that those are two hyphens with no space between them):  npm install dnsd --save
  9. Edit the www file:  vi ./bin/www
    1. After this line: var http = require('http'); add the indicated text seen in the code block below
    2. After this line: server.on('listening', onListening); optionally add the line:  console.log('Webserver running at *:3000');
  10. Determine the path of the node command you usually use and save this information for later:  which node
    1. Run the su command to elevate into superuser (root) mode:  su
    2. Change to the working folder from before: cd /Users/yourname/Sites/one-trick-pony
    3. Run the node command giving a full path to the executable, which you found in the earlier step: ../../local/node/bin/node ./bin/www
    4. At this point, you should see that the server is running, indicating that it’s listening to two different ports:  53 (DNS) and 3000 (HTTP).
  11. From a workstation you can verify that the DNS server is running with the indicated command, noting that the server should still be logging requests:  dig @1.2.3.4 www.agar.io
  12. Now from the iPad, for example, go to Settings -> Wi-Fi -> tap the ⓘ icon next to your connected wi-fi network -> DHCP -> DNS -> write down everything here and save it, then overwrite it with 1.2.3.4 (your server’s private IP address)
  13. Press the Home button twice and if Agar.io is running, swipe up to remove it from memory
  14. Start up the Agar.io app and verify that it logs in (even with Facebook), that it works AND that it no longer displays advertisements.
  15. When you’re finished, go to Settings -> Wi-Fi and either “Forget This Network” for your local wi-fi profile (re-entering your password) or manually re-enter the DNS information you wrote down in the earlier step.  Your iPad is now ready to behave as before.
  16. When you’re completely finished, go back to the server’s terminal session and press Ctrl-C to end Node and then enter the exit command to leave the su session.

Code to add into the ./bin/www file:

var dnsd = require('dnsd');

function dns_handler(req, res) {
  console.log('%s:%s/%s %j',
    req.connection.remoteAddress,
    req.connection.remotePort,
    req.connection.type,
    req);

  var question = res.question[0],
      hostname = question.name,
      length = hostname.length,
      ttl = Math.floor(Math.random() * 3600);

  if (question.type == 'A') {
    // Agar.io website
    if (hostname == 'agar.io' || hostname == 'www.agar.io' || hostname == 'm.agar.io') {
      res.answer.push({name: hostname, type: 'A', data: "104.20.26.122", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "104.20.25.122", ttl: ttl});
    }
    // Facebook.com authentication
    if (hostname == 'facebook.com') {
      res.answer.push({name: hostname, type: 'A', data: "31.13.69.228", ttl: ttl});
    }
    if (hostname == 'www.facebook.com') {
      res.answer.push({name: hostname, type: 'A', data: "31.13.77.36", ttl: ttl});
    }
    if (hostname == 'graph.facebook.com') {
      res.answer.push({name: hostname, type: 'A', data: "31.13.77.6", ttl: ttl});
    }
    // Amazon AWS
    if (hostname == 'prod-miniclip-v3-881814867.us-west-2.elb.amazonaws.com') {
      res.answer.push({name: hostname, type: 'A', data: "52.42.253.135", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "52.43.226.3", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "52.39.93.232", ttl: ttl});
    }
    // Miniclippt.com
    if (hostname == 'mobile-live-v5-0.agario.miniclippt.com') {
      res.answer.push({name: hostname, type: 'A', data: "52.8.170.192", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "52.9.37.138", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "54.183.177.123", ttl: ttl});
      res.answer.push({name: hostname, type: 'A', data: "52.52.55.140", ttl: ttl});
    }
  }
  res.end();
}

var dnsServer = dnsd.createServer(dns_handler);
dnsServer.zone('agar.io',
  'ns1.agar.io', 'root@agar.io', 'now', '2h', '30m', '2w', '10m');
dnsServer.zone('facebook.com',
  'ns1.facebook.com', 'root@facebook.com', 'now', '2h', '30m', '2w', '10m');
dnsServer.zone('amazonaws.com',
  'ns1.amazonaws.com', 'root@amazonaws.com', 'now', '2h', '30m', '2w', '10m');
dnsServer.zone('miniclippt.com',
  'ns1.miniclippt.com', 'root@miniclippt.com', 'now', '2h', '30m', '2w', '10m');
dnsServer.listen(53, '1.2.3.4');
console.log('DNS server running at 1.2.3.4:53');

Bind DNS server version

This version will assume that you have a Linux (Ubuntu, in this case) server or workstation that can run the bind9 service.

Here, I assume that you’re comfortable with commands in a terminal, know what sudo does and can use the vi editor to edit and save a file. You know what touch does. If any of these don’t sound familiar, then this probably isn’t the option for you.

On a Linux (Ubuntu) server, do the following:

  1. Make sure that your system is up-to-date:
    1. sudo apt-get update
    2. sudo apt-get upgrade
    3. sudo apt-get dist-upgrade
  2. Install the DNS service, noting that it will take a fair amount of configuration work
    1. sudo apt-get install bind9 bind9utils bind9-doc
  3. cd /etc/bind
  4. Create four empty files, one per “forward” zone. In the next steps you’ll be editing each, making sure to substitute your own server’s private IP address in each case.
    1. sudo touch for.agar.io
    2. sudo touch for.facebook.com
    3. sudo touch for.miniclippt.com
    4. sudo touch for.amazonaws.com
  5. sudo vi for.agar.io
    1. $TTL 86400
       @   IN  SOA     pri.agar.io. root.agar.io. (
                   2011071001  ;Serial
                   3600        ;Refresh
                   1800        ;Retry
                   604800      ;Expire
                   86400       ;Minimum TTL
       )
       @       IN  NS          pri.agar.io.
       @       IN  A           104.20.25.122
       @       IN  A           104.20.26.122
       pri     IN  A           1.2.3.4
       www     IN  A           104.20.25.122
       www     IN  A           104.20.26.122
       m       IN  A           104.20.25.122
       m       IN  A           104.20.26.122

  6. sudo vi for.facebook.com
    1. $TTL 86400
       @   IN  SOA     pri.facebook.com. root.facebook.com. (
                   2011071001  ;Serial
                   3600        ;Refresh
                   1800        ;Retry
                   604800      ;Expire
                   86400       ;Minimum TTL
       )
       @       IN  NS          pri.facebook.com.
       @       IN  A           31.13.69.228
       pri     IN  A           1.2.3.4
       www     IN  A           31.13.77.36
       graph   IN  A           31.13.77.6

  7. sudo vi for.miniclippt.com
    1. $TTL 86400
       @   IN  SOA     pri.miniclippt.com. root.miniclippt.com. (
                   2011071001  ;Serial
                   3600        ;Refresh
                   1800        ;Retry
                   604800      ;Expire
                   86400       ;Minimum TTL
       )
       @       IN  NS          pri.miniclippt.com.
       pri     IN  A           1.2.3.4
       mobile-live-v5-0.agario     IN  A   52.52.55.140
       mobile-live-v5-0.agario     IN  A   54.183.177.123
       mobile-live-v5-0.agario     IN  A   52.8.170.192
       mobile-live-v5-0.agario     IN  A   52.9.37.138

  8. sudo vi for.amazonaws.com
    1. $TTL 86400
       @   IN  SOA     pri.amazonaws.com. root.amazonaws.com. (
                   2011071001  ;Serial
                   3600        ;Refresh
                   1800        ;Retry
                   604800      ;Expire
                   86400       ;Minimum TTL
       )
       @       IN  NS          pri.amazonaws.com.
       pri     IN  A           1.2.3.4
       prod-miniclip-v3-881814867.us-west-2.elb  IN  A  52.42.253.135
       prod-miniclip-v3-881814867.us-west-2.elb  IN  A  52.39.93.232
       prod-miniclip-v3-881814867.us-west-2.elb  IN  A  52.43.226.3

  9. sudo vi named.conf.local
    1. # Append this to the file:
       zone "agar.io" {
           type master;
           file "/etc/bind/for.agar.io";
       };
       zone "facebook.com" {
           type master;
           file "/etc/bind/for.facebook.com";
       };
       zone "amazonaws.com" {
           type master;
           file "/etc/bind/for.amazonaws.com";
       };
       zone "miniclippt.com" {
           type master;
           file "/etc/bind/for.miniclippt.com";
       };

  10. sudo vi named.conf
    1. # Append this to the file:
       logging {
           channel query.log {
               file "/var/log/query.log";
               severity debug 3;
           };
           category queries { query.log; };
       };

  11. Make sure that the service can read/control its configuration files:
    1. sudo chmod -R 755 /etc/bind
    2. sudo chown -R bind:bind /etc/bind
  12. sudo vi /etc/apparmor.d/usr.sbin.named
    1. # Insert this line inside the "/usr/sbin/named {" section
       /var/log/query.log w,

  13. Create an empty log file, change ownership and make sure that the service can write to it
    1. sudo touch /var/log/query.log
    2. sudo chown bind /var/log/query.log
    3. cat /etc/apparmor.d/usr.sbin.named | sudo apparmor_parser -r
  14. Verify that the configuration files will parse correctly:
    1. sudo named-checkconf /etc/bind/named.conf
    2. sudo named-checkconf /etc/bind/named.conf.local
    3. sudo named-checkzone agar.io /etc/bind/for.agar.io (repeat for other zone files)
  15. Stop/start the DNS service:
    1. sudo systemctl restart bind9
  16. Follow the instructions from step 11 in the Node.js section to verify that the DNS server is running, substituting the IP address of the Ubuntu server.
  17. As before, configure the iPad to use your server’s IP address and test the Agar.io app
  18. You can watch what the app is querying from your server, giving you insight into how many ad servers are actually involved: tail -f /var/log/query.log
  19. When you are completely finished, you may stop the DNS server:  sudo systemctl stop bind9

That’s it. I’ve described how to set up two different DNS servers, either of which should effectively defeat the ads you’d normally see during Agar.io game play.

And now, I think I’ll settle into some uninterrupted Agar.io, without having to unnecessarily stop the game to wait out some long-running, buggy ad attempt (and lose my earned XP points).

database app, no server-side

This is new for me. As a long-time website developer I consider myself a hardcore backend developer. For years I’ve contracted out as the guy you’d go to for the database design and subsequent server-side code to access that database. And now I find myself working on a website with a slick-looking frontend and—gasp!—no server-side coding at all.

“How is this even possible?” you ask. Even a week ago, I’d have been just as confused as you may be now.

Firebase

Fortunately, there’s a platform called Firebase which actually allows you to write a database application with no server-side code whatsoever.

Here’s a list of things you’d normally need backend code to do on behalf of activities initiated by client code (on both your users’ and your admins’ browsers):

  1. Authentication, password maintenance, rights control and logged-in state management
  2. Creating database records or objects
  3. Reading from database records or objects
  4. Updating database records or objects
  5. Deleting database records or objects

It turns out that you can configure Firebase to use email/password authentication, and as a result of this decision you can do your entire site design without writing any server code.

As an added benefit, you don’t have to find a hosting provider for server-side code either. And since Firebase will also serve your static HTML website, this appears to be a win-win.
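Here’s a minimal sketch of what that client-only pattern might look like with the Firebase web SDK. The config values, the email/password pair and the notes path are all placeholders for your own project’s settings:

// Minimal client-only sketch: email/password sign-in plus a database write.
// Every config value below is a placeholder; substitute your project's own.
firebase.initializeApp({
  apiKey: 'YOUR-API-KEY',
  authDomain: 'your-app.firebaseapp.com',
  databaseURL: 'https://your-app.firebaseio.com'
});

firebase.auth().signInWithEmailAndPassword('user@example.com', 'secret')
  .then(function(user) {
    // Create/update a record; Firebase's server-side security rules decide
    // whether this logged-in user is allowed to write here.
    return firebase.database().ref('notes/' + user.uid).set({
      text: 'Hello from the client',
      updated: firebase.database.ServerValue.TIMESTAMP
    });
  })
  .catch(function(err) {
    console.error('Auth or write failed:', err);
  });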

Changing your perspective

Server-centric

In other systems, Node.js for example, you write your application from a server-centric perspective. You might begin by creating something that listens on a particular port, set up a router deciding which pages are delivered, and then add handlers for when a page is requested or when form data is submitted. Lastly, you might write some separate templates which are rendered to the client when a page is requested. The design approach is very much: server-side first, client-side second. A minimal Express sketch of that shape follows (the routes and template names are just placeholders):
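var express = require('express');
var app = express();

// Templates rendered server-side when a page is requested
app.set('view engine', 'jade');

app.get('/', function(req, res) {
  res.render('index', { title: 'Home' }); // handler for a page request
});

app.post('/contact', function(req, res) {
  // handler for submitted form data
  res.redirect('/');
});

// Listen on a particular port
app.listen(3000);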

Client-centric

Firebase turns things completely around. Here you might begin with the page design itself, using something new like Google’s Polymer framework. You focus a lot of attention on how great that design looks. Then at some point you need to register a new account and authenticate, and this is where you code it from client-side JavaScript. The design approach here is: client look-and-feel first, client JavaScript to authenticate second.

Rendering static plus dynamic content

In the past we might have rendered pages with server-side code, merging data into a template of some kind, say, something written in Jade. In this new version we still might have a template, but it lives on the client now. Additionally, Polymer allows custom elements to be created, and if you’ve ever written server-side templates, its data binding works much as you’d expect.

Page routing

The Polymer framework includes a client-side routing mechanism so that you may serve up different pages from the same HTML document. But even if you don’t use this approach, Firebase’s hosting will do it for you; just create separate pages and upload them, and they’ll take care of the rest.

Why you might want this

Like me, you might have built up a level of comfort with earlier approaches. I often think about a website design from the server’s perspective. One downside to this is that you can end up with a website that looks like you spent 90% of your effort on the backend code and didn’t leave enough time in your schedule to make things look really awesome for your users.

By beginning your design with the UI you are now forcing yourself to break out of those old habits. You work up something that looks great and only then do you begin the process of persisting data to the database server.

[Image: Firebase]

This now allows you to focus on how the application will look on many different devices and screen resolutions, and on whether those devices include a touchscreen and features such as GPS, re-orientation, etc.

Google and Firebase

This Firebase approach works rather well with the Polymer framework, and I’m sure that’s the intent. In fact, there seems to be a fair bit of collaboration going on between the two, with Google suggesting on its own website that you host on Firebase.

Scalability

I think one big benefit of having no server side is that there’s no server-side app to scale up. The downside is that you’ll likely have to upgrade your hosting plan with Firebase as traffic grows, and the pricing may or may not be as attractive as other platforms, like Node.js on Heroku.

Custom domain

Of course, you have to pay a minimum of $5/month to bind a custom domain name to your free instance. I wouldn’t necessarily call that expensive, unless this is just a development site for you; in that case, feel free to use the issued instance name. At this $60/year level you get 1GB of storage, which is likely enough for most projects.

Pricing note

Firebase’s pricing page mentions that if you exceed your plan’s storage and transfer limits, you’ll be charged for the overage. On the free plan you haven’t entered your credit card information, so presumably they deny service at that point instead. If you’ve opted for the minimum paid tier, note that under-sizing it could incur additional charges.

Overall thoughts

So far, I think I like this. Google and Firebase may have a good approach to the future of app development. By removing the server, you’ve saved the website designer a fair bit of work. By removing the client-side mobile app for smartphones, you’ve removed the need to digitally sign your code with iOS/Microsoft/Android developer certificates, or to purchase and maintain those certificates at all.

All of this appears to target the very latest browser versions out there, the ones which support the very cool, new parallax scrolling effects, to name one new feature. The following illustration demonstrates how different parts of your content scroll at different rates as your end-user navigates down the page.

[Illustration: parallax scrolling effect]

Since parallax scrolling is now “the new, new thing” of website design, I’d suggest that this client-centric approach with Polymer and Firebase is worth a look.

quiā possum

Translated from Latin to English:  “Because I can”

I suppose that would be the written answer to the unspoken question, “So why are you writing an English-to-Latin translation program in JavaScript?”  Google Translate is usually okay at rough translations from almost any language to another. One that it can’t do well is Latin. From the size and difficulty of the material in my new copy of Wheelock’s Latin, I can attest that Google would be hard-pressed to achieve any sort of quality in its own attempts. To state things plainly, it’s simply a difficult problem.

People who attend college are asked to perform a variety of tasks which may ultimately seem like a waste of time, and in the big scheme of things so is this project. I doubt anyone scholarly would want to use my translation tool. So why bother with it at all? The biggest reason is to challenge myself to do something quite difficult. Problems like this demand that you sit down and think about the organization of the task at hand, and that sort of training is good for software developers because it teaches us how to organize code. Do you parse the English sentence down to words and then throw them at something that will translate each one in turn? Clearly, this looks to be the way that Google has approached the task, and in the case of Latin it is destined to fail.

“Do you parse the English sentence down to words and then throw them at something that will translate each one in turn?”

The pitfalls of idioms

As a demonstration of an idiom, I recall one of my first used cars. I bought it while stationed in Germany, and it had a bumper sticker affixed to the inside door panel on the driver’s side. Its German text read “Sie nicht auf in die Luft gehen…”, accompanied by a cartoon man accidentally floating up into the air. If you attempt a word-by-word translation with a tool like Google, you’re bound to be perplexed by the true meaning of the idiom “to go up in the air”, which basically means “to get mad” or “to get ruffled”. In case you’re wondering, it comes from an old television cigarette commercial that featured the cartoon character, likely a play on the way cigarette smoke also goes up in the air. If you were tasked to write a German-to-English translation tool, these sorts of things would be quite difficult unless you were a native speaker of German.

And yet, Latin is a dead language. None of us grew up speaking it, and that creates a lot of unfamiliarity.

It’s all in the approach

If you did have a dictionary of idioms, it would probably be best to translate those out of the sentence first and only then attempt to translate the remaining individual words. Parsing down to individual words and translating each one is prone to fail, as evidenced by many online translation sites. Perhaps a better approach is to look for the largest recognizable phrases and then progressively translate down, ad infinitum, as they’d say in Latin.

“Perhaps a better approach is to look for the largest recognizable phrases and then progressively translate down, ad infinitum, as they’d say in Latin.”
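A hypothetical sketch of that idea in JavaScript; the one-entry dictionaries are purely for illustration:

// Hypothetical sketch: substitute known idioms/phrases first (longest match
// wins), then fall back to word-by-word lookup for whatever remains.
var idioms = {
  'because i can': 'quia possum' // illustrative entry only
};
var words = {
  'i': 'ego', 'go': 'eo' // illustrative entries only
};

function translate(sentence) {
  var text = sentence.toLowerCase();
  // Pass 1: replace the largest recognizable phrases first.
  Object.keys(idioms)
    .sort(function(a, b) { return b.length - a.length; })
    .forEach(function(phrase) {
      text = text.split(phrase).join(idioms[phrase]);
    });
  // Pass 2: translate the remaining individual words.
  return text.split(/\s+/).map(function(word) {
    return words[word] || word; // leave unknowns untouched
  }).join(' ');
}

console.log(translate('Because I can')); // "quia possum"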

And yet, I’m just getting started. Eventually, I should be able to get to the point where the analyzer can determine person, number, tense, mood and voice from the entire sentence as submitted. With complete sentences, that’s often possible. But sometimes it’s not. Take for example:

"Go!"

Unless you’re the speaker or an observer of the scene, you’re missing a critical piece of information here: the number of people being addressed. Without that, you can’t translate this second-person, present-tense, imperative, active verb into Latin. Unless you have all the attributes, you can’t accurately do the job.

Decisions, decisions…

You could take two different approaches when you don’t have all the required inputs: 1) don’t translate, or 2) show all possible versions of the ambiguity. In the second case, you might translate both the singular and plural second-person conjugations of “to go”.

The approach that I’ve decided to take is to simply not translate if there are any unknowns. I have added options so that the end user may hint at what’s missing and this seems to be working out for now as I test things with short sentence fragments.
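Here’s a hypothetical sketch of that policy: refuse to translate when an attribute is unknown, and let the caller pass a hint to resolve it.

// Hypothetical sketch: refuse to conjugate when attributes are unknown;
// the caller may pass hints (e.g. { number: 'plural' }) to resolve them.
function imperativeOf(verb, hints) {
  hints = hints || {};
  // Illustrative forms for English "go" (Latin īre): "ī" / "īte".
  var forms = { go: { singular: 'ī', plural: 'īte' } };
  if (!forms[verb]) {
    return { ok: false, reason: 'unknown verb' };
  }
  if (!hints.number) {
    return { ok: false, reason: 'number (singular/plural) not determinable' };
  }
  return { ok: true, latin: forms[verb][hints.number] };
}

console.log(imperativeOf('go'));                       // refuses to translate
console.log(imperativeOf('go', { number: 'plural' })); // { ok: true, latin: 'īte' }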

Keyboard fun

Latin includes vowels with macrons over them; these indicate long vowels. Typing them was an initial challenge. On my Mac, I found it was first necessary to switch the keyboard layout (“ABC Extended”) to allow keystroke combinations such as the following:

Option-A, a -> ā
Option-A, e -> ē
Option-A, o -> ō

These combinations produce the required characters, which display nicely in the browser as expected.

Progress

Things are coming along well. I’ve created a Node Express project to display an HTML form that accepts input. I think I’ve (finally) done a good job organizing the backend code in a way that takes Node’s asynchronous behavior into account. My first attempts failed oddly when I coded things sequentially; Node doesn’t reward that style of code organization.
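As an illustration of the pitfall (the file name here is hypothetical):

var fs = require('fs');

// Looks sequential, fails: readFile hasn't finished when the next line runs.
var dictionary;
fs.readFile('dictionary.json', 'utf8', function(err, contents) {
  dictionary = JSON.parse(contents);
});
console.log(dictionary); // undefined -- the read is still in flight

// Works: continue inside the callback once the data has actually arrived.
fs.readFile('dictionary.json', 'utf8', function(err, contents) {
  if (err) throw err;
  var loaded = JSON.parse(contents);
  console.log(Object.keys(loaded).length + ' entries loaded');
});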

So far, so good

[Screenshot: the Latin translator in action]

I may write more from time to time about this project here. I’m sure it will keep me busy for a while.

one code to rule them all

JavaScript

Who’d have thought ten years ago that JavaScript would be so popular now? I think we can reasonably thank Node.js, released back in 2009, for JavaScript’s enduring popularity. It’s no longer just the browser-side validation tool of its earliest days; it’s a full-blown programming language that’s reached maturity.

Officially, JavaScript has been on the scene since 1995, over twenty years ago. The original version was written in ten days. A server-side variant even appeared that same year, but JavaScript didn’t really take off as a backend coding tool until recently. It wasn’t until Node.js’s asynchronous methodology arrived that it could truly find its place in mainstream coding.

Standardized JavaScript

Fortunately for all of us, Netscape submitted the proposed JavaScript standard to Ecma International to formally bless the language as a standard. Microsoft’s own version differed slightly at the time, and having an unbiased third party like Ecma bless the standard gave the rest of us some relief in the browser wars going on among the big players in this space. Time has passed, and we now anticipate the sixth formal JavaScript specification from Ecma being implemented by the various browsers: ECMAScript 6, also known as ES6 Harmony.

JSON

JavaScript Object Notation (JSON) is a useful standard for transferring and storing data. Its biggest competitor in this space is probably XML and its many dialects. The two are similar in that both store data marked up with field names, yet they differ in how that markup occurs.
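A quick illustration of that difference: parsed JSON is immediately usable as a JavaScript object, while the equivalent XML needs a separate parsing step.

// JSON: parse it and the data is immediately usable as a JavaScript object.
var json = '{"name": "Alice", "score": 42}';
var player = JSON.parse(json);
console.log(player.score + 1);        // 43
console.log(JSON.stringify(player));  // back to a string for storage/transfer

// The XML equivalent of the same record,
//   <player><name>Alice</name><score>42</score></player>,
// requires a separate parser before the fields can be used.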

JSON’s popularity now is almost wholly due to Node.js’s domination of the playing field.  It’s simple to open and use JSON data within JavaScript and since Node is the platform of choice, JSON can’t help but be the favorite storage and transfer format.

Node.js

I could reasonably assert that there are two types of coders out there: 1) those who haven’t used Node.js yet and 2) those who love it. It’s an awesome concept: write code in JavaScript and use Node to spawn (run) it. Node manages an event queue for you and deals with what happens when some of your code takes longer than it should (“blocking calls”). You can create an entire webserver app within a few minutes with Node, and since JavaScript is such a well-known language among coders, the comfort level of the resulting code is higher than for the alternative language choices available.

“There are two types of coders out there:  1) those who haven’t used Node.js yet and 2) those who love it.”
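A complete, if minimal, example of such a webserver, using only Node’s built-in http module:

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
}).listen(3000, function() {
  console.log('Webserver running at *:3000');
});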

With other languages and development platforms, you scale up by breaking your code into multiple threads of execution, and you then have to manage inter-thread communication and timing yourself. In the Node.js world, though, you scale your app by having something bring up another instance of the main app itself.
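As a sketch, Node’s built-in cluster module does exactly this kind of instance-based scaling (the ./app entry point is hypothetical):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // Bring up one full copy of the app per CPU; no inter-thread
  // communication to manage, just more instances.
  os.cpus().forEach(function() {
    cluster.fork();
  });
} else {
  require('./app'); // hypothetical entry point of your main app
}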

Hosting a Node.js App

This new model of scaling matches nicely with a variety of cloud virtual computer providers such as Amazon and Microsoft.  Even better, a secondary market of Node.js platform providers like OpenShift and Heroku provide a space for your application to be hosted.  (Originally, you would have to create a virtual computer at Amazon, for example, install all the dependencies to run everything and then add your Node.js app.  But now, a provider like Heroku assumes that you have a Node.js app and they take care of the prep-work for you.)

If you haven’t already done so, check out Red Hat’s OpenShift website as well as Heroku. Both offer a (typically) free tier if you accept the scalability defaults, and both work quite well for hosting a Node.js application. Both sites offer good Getting Started documentation, though I found Heroku slightly easier as a beginner. I’m currently hosting one Node.js app on each of them and am happy with both providers. Note that if your app needs additional “always on” (also known as “worker”) processes, make sure you fully understand each provider’s pricing model before settling into either arrangement; you might easily incur a fee of roughly $50/month for such an app. Otherwise, the base scalability of both providers is essentially free.