android os

I got tantalizingly close with Android OS on this attempt, manually creating my own USB install disk with the very good open-source UNetbootin software. I can see now that the folks from Remix OS had customized both code bases for their own use in an attempt to make things easier but fell short, it would seem. UNetbootin appears to let you create almost any sort of live/install USB drive and is well worth the time spent with it.

I managed to "live boot" (without installing) Android OS from my USB drive only to find that Google Play Services appears to crash on the Dell Vostro 200 upon initialization, a known bug. The software appears to expect that 1) I have cellphone service or 2) I have wi-fi. I have neither, and it doesn't seem to know how to fall back to DHCP on my Ethernet adapter and continue.

I'll check the documentation to find out how I might get further, but this is the best attempt so far within the Android OS-compatible collection.

Eureka…?

I've finally gotten the live boot to work by sneakily removing the Ethernet connection in order to get past the Google Play Services screen. It has an interesting interface that looks a lot like a cellphone's. So I've decided to boot again and actually install this time.

I've managed to navigate around the interface a bit during the live boot session. Oddly enough, you can get to a terminal with Ctrl-Alt-F1 and back to the GUI with Ctrl-Alt-F7. There's a very thin layer of UNIX under the covers and some familiar commands if you're savvy to such things.

It appears to be a little heavy-handed with the processor fan control, in my humble opinion. The fan is cranked to full almost any time the system is doing anything; for example, it's blaring away while presumably formatting the first drive.

The installation process is shy on status. I couldn’t honestly tell you how much of the drive is formatted at the moment, for example. In old Western movies the Indian scout would put his ear to the ground in order to hear distant horses. Here, I put my ear to the side of the chassis in an attempt to hear whether or not the drive is being written to. Color me “worried”.

If At First You Don’t Succeed…

Okay, that didn't work. So I reboot and reformat, but this time without GRUB. It now appears to be going further.

Android OS starts up, at least. It does still throw an error that Google Play Services has stopped, and as before I need to temporarily disconnect the Ethernet cable to get past the same bug seen during the live boot attempt.

Finally, booting from the hard drive results in "error: no such partition. Entering rescue mode... grub rescue>". Really?

I think we need to vote Android OS off the island as a viable solution.

And yet…

I just keep imagining that this will work. So now I'm trying again with the previous (perhaps more stable) version, Android-x86 5.1 RC1. This version plows right through the previous installation bug involving the Ethernet adapter, always a good sign. And the browser actually comes up without crashing, another bonus.

Finally, I’ve managed to find a working version of an Android OS for a PC. I’ll continue my review in depth and follow-up with what I find.

phoenix os

Continuing the testing of replacement operating systems, I next attempted to install Phoenix OS from Beijing Chaozhuo Technology Co. Unfortunately, that failed during multiple attempts to get the installer onto the USB drive. I'm not sure what format they're looking for, but it errors out with a message complaining about NTFS partitions. I tried both FAT and FAT32 formatting but neither made the installer happy.

Again, it looks like we’ll need to pass on another Android-like operating system for PCs.

windows 10 “free upgrade” is over

Bummer. We had a year to upgrade from Windows 7/8 to the latest Windows 10 for free and we missed it. It's not because we're lazy. Sometimes it's just because I.T. is under-funded and it literally takes all of our time to do other things. A fraction of those workstations only had 2GB of RAM, for example, and couldn't be updated. And sometimes you have a collection of Windows XP computers that didn't qualify for the update.

So here you are, August 2016 and you have several aging computers that may or may not be worth the $$ to pay for the Windows 10 license. If you’re anything like me then you review alternatives.

MAAS (Metal as a Service)

One very cool option for a pile of old Dell Vostro 200 workstations is to convert them into a private cloud. I recently did just this. Imagine a rack of computers—all without monitors/keyboards/mice—and they all do something you rarely see: they boot over the network via Ethernet to pull down an image you've set up. Once you've set everything up it's wonderfully automatic. The bare-metal provisioning is handled by Canonical's MAAS, and the collection of cloud services it deploys is called OpenStack.

What’s even better is that once each node has fully provisioned itself with an image, it goes to sleep, turning itself off. And then the cluster control can wake it up remotely over Ethernet and it goes to work again.
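
That remote wake-up is the standard Wake-on-LAN mechanism: the controller broadcasts a "magic packet" containing the sleeping node's MAC address. A minimal sketch in Node.js (the MAC address in the example is made up):

```javascript
// Build a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the
// target machine's MAC address repeated 16 times (102 bytes total).
function buildMagicPacket(mac) {
  const macBytes = mac.split(':').map(h => parseInt(h, 16));
  if (macBytes.length !== 6 || macBytes.some(Number.isNaN)) {
    throw new Error('expected a MAC like aa:bb:cc:dd:ee:ff');
  }
  const packet = Buffer.alloc(102);
  packet.fill(0xff, 0, 6);                       // 6-byte synchronization header
  for (let i = 0; i < 16; i++) {
    Buffer.from(macBytes).copy(packet, 6 + i * 6); // MAC x 16
  }
  return packet;
}

// The controller would then send this via UDP broadcast (commonly
// port 9) using Node's dgram module.
const packet = buildMagicPacket('00:11:22:33:44:55');
```

The node's Ethernet adapter watches for this pattern even while the machine is otherwise powered down, which is what lets the cluster controller bring nodes back on demand.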

And best of all, the entire thing is free from a software standpoint. (Free is good.) Note that for the default installation you'll need a spare Ethernet hub/switch, one of the computers needs two Ethernet adapters and at least five of the computers will need dual hard drives. Since I had so many spare computers I just cannibalized where necessary.

If you're interested in reviewing this as well, check out the MAAS documentation on Ubuntu's website. Once you're finished you'll have a system in which you can spin up virtual computers and allocate them as you wish. You can securely remote into these virtual computers using the PuTTY SSH client. If you've configured it to use public IP addresses you can even publish websites, for example.

The version that I reviewed was a few back from the current release and hopefully everything is much more stable now. I noted then that it wasn’t quite ready for “prime time” but I’d guess that it’s ready to go by now.

Ubuntu Server or Desktop

Even if you don’t go all the way and create a private cloud you can always just install the free Ubuntu operating system as a server or a desktop computer. It seems to be a very usable collection of well-maintained code. Canonical is the company behind this effort.

Remix OS

And today I'm trying something I've just discovered called the Remix OS for PC. It's essentially the Android operating system for smartphones, just set up especially for a standard computer. Jide Technology appears to be the underlying developer.

At the one-hour mark: Things looked good for the first fifteen minutes or so of the installation. Unfortunately, after an hour I'd guess that it's possibly stuck. The Dell Vostro 200 appears to have an acceptable graphics adapter (Intel GMA 3100) and yet I still don't have a full installation. Since the status light on the USB drive does still randomly blink, perhaps it's just taking a very long time. I'll not interrupt it and see what happens.

At the two-hour mark: I’m still staring at the same pulsing Remix OS logo. The status light seems to indicate that progress is still happening or so I’d hope.

End-of-day: At this point I think I’m going to just let it run all night if it wants and see if it’s finished in the morning.

a new app pricing model

Personally, I don't like the idea of "in-app purchases," which seems to be the norm these days. You get the app for free, try to use it and then find out that you can't save your results unless you pay the author for that feature. If you're like me then you usually feel like they've wasted your time.

I would suggest that a better model is how we now buy music.

Changes in the recording industry

You used to have to purchase the entire album just to get a single song; that was just how things were. If an artist or band had one killer song enjoying a lot of airplay on the radio and you really wanted that song, you had to pay the $12 or so for the entire album. And you just crossed your fingers that one or two of the other songs made it worthwhile.

With the advent of iTunes and similar websites, we now have the ability to sample and purchase exactly which songs we want to pay for. If half of the album isn’t worth it, you don’t have to buy all of it. If the artist only has one good song then there’s no reward for them to pad the album with a lot of junk.

A new model for app pricing

So why shouldn’t we just show our app’s prices up front instead of hiding them inside? Currently, a new customer can’t see how much of the app is crippled and how much is functional until it’s been downloaded and used. How many times have you downloaded and demo’d two or even three different free apps of the same kind, trying to find one that was reasonably useful?

What we need is a venue for selling our apps like musicians sell their songs. Theoretically, it might look something like this:

[Illustration: a pay-as-you-go pricing screen listing each app feature with its own price]

More feature transparency

The biggest benefit to a model like this is that it shows the potential customer what’s included in the full program and the cost of each feature. If, like me, they’re not interested in the social connector feature then they simply don’t purchase it. You pay for what you need and nothing more.

As developers, we would distribute modules of functionality and charge the user on a per-module basis. I would suggest that features be priced differently based upon the perceived value. In fact, there’s nothing to prevent the price of a popular feature from increasing over time.

Like in the iTunes model, clicking a play button next to the feature ought to bring up a demo or screenshot of the feature in action.

database app, no server-side

This is new for me. As a long-time website developer I consider myself a hardcore backend developer. For years I’ve contracted out as the guy you’d go to for the database design and subsequent server-side code to access that database. And now I find myself working on a website with a slick-looking frontend and—gasp!—no server-side coding at all.

“How is this even possible?” you ask. Even a week ago, I’d have been just as confused as you may be now.

Firebase

Fortunately, there’s a platform called Firebase which actually allows you to write a database application with no server-side code whatsoever.

Here’s a list of things you’d normally need backend code to do on behalf of activities initiated by client code (on both your users’ and your admins’ browsers):

  1. Authentication, password maintenance, rights control and logged-in state management
  2. Creating database records or objects
  3. Reading from database records or objects
  4. Updating database records or objects
  5. Deleting database records or objects

It turns out that you can configure Firebase to use email/password authentication and as a result of this decision you can do your entire site design without necessarily writing any server code.
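
To make that concrete, here is a sketch of the client-side CRUD pattern. The comments show the call shapes of the Firebase Web SDK of that era as I understand them; the tiny in-memory stand-in beneath them is invented here purely so the example runs on its own, without the SDK:

```javascript
// A mock with the same ref().set/once/update/remove shapes as the
// Firebase realtime database client, for illustration only.
function makeDb() {
  const store = {};
  return {
    ref(path) {
      return {
        // real SDK: firebase.database().ref(path).set(value)
        set(value) { store[path] = value; },
        // real SDK: ...ref(path).once('value') -> snapshot, read with .val()
        once() { return { val: () => store[path] }; },
        // real SDK: ...ref(path).update(partial)
        update(partial) { store[path] = Object.assign({}, store[path], partial); },
        // real SDK: ...ref(path).remove()
        remove() { delete store[path]; },
      };
    },
  };
}

const db = makeDb();
db.ref('users/alice').set({ name: 'Alice', role: 'admin' }); // Create
const user = db.ref('users/alice').once().val();             // Read
db.ref('users/alice').update({ role: 'editor' });            // Update
db.ref('users/alice').remove();                              // Delete
```

Authentication follows the same client-side pattern—something like `firebase.auth().createUserWithEmailAndPassword(email, password)`—with no server of your own involved.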

As an added benefit, you don't have to find a hosting provider for that server-side code either. And since Firebase allows you to serve up your static HTML website, this appears to be a win-win.

Changing your perspective

Server-centric

In systems like Node.js, you write your application from a server-centric perspective. You might begin by creating something which listens on a particular port, set up a router deciding which pages are delivered, and then set up handlers for when a page is requested or when form data is submitted. Lastly, you might write some separate templates which are then rendered to the client when a page is requested. The design approach is very much: server-side first, client-side second.

Client-centric

Firebase appears to turn things completely around. In this case you might begin with the page design itself, using something new like Google's Polymer framework. You would focus a lot of attention on how great that design looks. But at some point you need to register a new account and then authenticate, and this is where you code it in client-side JavaScript. Here, the design approach is: client look-and-feel first, client JavaScript to authenticate second.

Rendering static plus dynamic content

In the past we might have rendered pages with server-side code, merging data into a template of some kind, say, something written in Jade. In this new version we still might have a template, but it lives on the client now. Additionally, Polymer allows custom elements to be created, and if you've ever written server-side templating code, Polymer's data binding will feel familiar.
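
As a toy illustration of that shift—merging data into markup in the browser rather than on the server—here is a bare-bones interpolation function. It is not Polymer's actual binding engine, just the general idea:

```javascript
// Replace {{name}} placeholders in a template string with values from
// a data object; unknown placeholders are left untouched.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match);
}

const html = render('<h1>Hello, {{user}}!</h1>', { user: 'Sam' });
// html is '<h1>Hello, Sam!</h1>'
```

In a framework like Polymer this happens declaratively and updates live as the data changes; the sketch above only shows the one-shot merge.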

Page routing

The Polymer framework includes a client-side routing mechanism so that you may serve up different pages from the same HTML document. But even if you don't use this approach, Firebase's hosting will do that for you; just create separate pages, upload them and they'll take care of the rest.
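
The mechanism behind client-side routing is simple enough to sketch without Polymer. This hash-based version (with hypothetical route names) shows the idea of mapping a URL fragment to a "page" within one document:

```javascript
// Resolve a URL hash like '#/about' to a page identifier, with a
// fallback for unknown routes.
function resolveRoute(hash, routes, fallback) {
  const path = (hash || '#/').replace(/^#/, '');
  return routes[path] || fallback;
}

const routes = { '/': 'home-page', '/about': 'about-page' };
resolveRoute('#/about', routes, 'not-found'); // 'about-page'

// In the browser you would wire this to the hashchange event:
//   window.addEventListener('hashchange', () =>
//     show(resolveRoute(location.hash, routes, 'not-found')));
```

Polymer's own router is considerably more capable (route parameters, lazy loading and so on); this only demonstrates why no server round-trip is needed to switch pages.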

Why you might want this

Like me, you might have built up a level of comfort with earlier approaches. I myself often think about a website design from the server’s perspective. One downside to this approach is that you possibly could end up with a website design that looks like you spent 90% of your effort on the backend code and didn’t have enough time in your schedule to make things look really awesome for your users.

By beginning your design with the UI you are now forcing yourself to break out of those old habits. You work up something that looks great and only then do you begin the process of persisting data to the database server.


This now allows you to focus on how the application will look on many different devices, screen resolutions and whether or not those devices include a touchscreen and features such as GPS, re-orientation, etc.

Google and Firebase

All of this Firebase approach works rather well with the Polymer framework, and I'm sure this is the intent. In fact, there seems to be a fair bit of collaboration going on between the two, with Google itself suggesting that you host on Firebase.

Scalability

I think one big benefit to having no server side is that there is no server-side app of yours to scale up. The downside is that as traffic grows you'll likely have to upgrade your Firebase hosting plan, and the pricing may or may not be as attractive as other platforms like Node.js on Heroku.

Custom domain

Of course, you have to pay at least $5/month to bind your custom domain name to your free instance. I wouldn't call that expensive necessarily, unless this is just a development site for you; in that case, feel free to use the issued instance name. At this $60/year level you get 1GB of storage, which is likely enough for most projects.

Pricing note

Firebase's pricing page mentions that if you exceed your plan's storage and transfer limits then you will be charged for those overages. Obviously, on the free plan you haven't entered your credit card information yet, so they would presumably just stop serving your content instead. If you have opted for a paid tier, note that under-sizing it could incur additional charges.

Overall thoughts

So far, I think I like this. Google and Firebase may have a good approach to the future of app development. By removing the server you've saved the website designer a fair bit of work. By removing the client-side mobile app for smartphones, you've removed the need to digitally sign your code with iOS/Microsoft/Android developer certificates, or to purchase and maintain them.

All of this appears to target the very latest browser versions out there, the ones which support the very cool, new parallax scrolling effects, to name one new feature. The following illustration demonstrates how different parts of your content scroll at different rates as your end-user navigates down the page.

[Illustration: parallax scrolling, with different layers of content moving at different rates]

Since parallax scrolling is now "the new, new thing" of website design, I'd suggest that this client-centric approach with Polymer and Firebase is worth a look.

when did favicon get so needy?

Most of us probably treat favicon.ico as an afterthought—say, until you've finally shown the work to someone else and noticed that the default icon is in use. Up to that point in the project you've likely filed it under work to do later.

For those who don't know already, this is the icon file displayed in Internet Explorer just to the left of the URL in the Location field. At one time it was only 16×16 pixels and enjoyed very little screen real estate, if you will. The file, if present in your website's root directory, would be pulled automatically by the browser. If you were an early website implementer like myself, you discovered the concept while reading your website's log files and seeing all the "404 - file not found" errors from newer browsers which now expected it. So you likely created one to clean up your log files and shrugged it off.

Platform Wars

Fast-forward to today and there are many popular browsers as well as platforms. Perhaps the biggest change has been the advent of touchscreen platforms that demand much larger buttons for invoking your shortcut. Bigger buttons mean that our earlier one-size-fits-all approach no longer works; you can't scale up a 16×16 icon to the huge sizes required for a web TV platform.

Unfortunately, nobody came up with a consistent standard for this. What would have been nice would have been for everyone to populate a folder structure like this:

    /icons/favicon.ico
    /icons/icon_16x16.png
    /icons/icon_32x32.png
    /icons/icon_256x256.png
    /icons/touch_256x256.png

And done. In this perfect world these would be the only icons and tiles available for the sum total of all browsers, all platforms, all devices. Period. If an operating system or browser needed something different, it would be necessary for that system to read the closest available graphic and then create something else from it as required. It should not be up to the website designer to create everything that is seemingly required today.

Microsoft Internet Explorer

Microsoft originally specified /favicon.ico to include one or more images. It could contain a 16×16, 32×32 and/or a 48×48 pixel version. Although the first size is good enough for the location field of the browser, if the end-user minimizes the browser then the 16×16 doesn’t seem big enough for the taskbar within Windows. Alternately, creating a shortcut to the website under some screen resolutions and settings requires an even bigger default icon, hence the three sizes.

Speaking of which, what format is a .ico file anyway? Even though Microsoft has a tool within Visual Studio to combine multiple files into an .ico file, you can actually get away with just renaming a .gif, .jpg or .png file to favicon.ico and it will work fine with most browsers.
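
As it happens, the container itself is simple enough to sketch. The Node.js snippet below writes just the 6-byte header and a 16-byte directory entry per image; the pixel data (classically a BMP-style bitmap, though modern browsers also accept embedded PNG) is left out. This is my reading of the format, so treat it as a sketch:

```javascript
// Write the ICONDIR header and ICONDIRENTRY records of a .ico file.
// images: [{ width, height, size, offset }, ...]; all little-endian.
function icoHeader(images) {
  const buf = Buffer.alloc(6 + 16 * images.length);
  buf.writeUInt16LE(0, 0);              // reserved, always 0
  buf.writeUInt16LE(1, 2);              // type: 1 = icon (.ico)
  buf.writeUInt16LE(images.length, 4);  // number of images in the file
  images.forEach((img, i) => {
    const o = 6 + 16 * i;
    buf.writeUInt8(img.width % 256, o);      // width  (0 means 256)
    buf.writeUInt8(img.height % 256, o + 1); // height (0 means 256)
    buf.writeUInt8(0, o + 2);                // palette size (0 = no palette)
    buf.writeUInt8(0, o + 3);                // reserved
    buf.writeUInt16LE(1, o + 4);             // color planes
    buf.writeUInt16LE(32, o + 6);            // bits per pixel (assumed 32)
    buf.writeUInt32LE(img.size, o + 8);      // image data length in bytes
    buf.writeUInt32LE(img.offset, o + 12);   // offset of the image data
  });
  return buf;
}
```

The multi-resolution trick mentioned above is simply multiple directory entries pointing at different blocks of image data in the same file.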

Mobile Platforms

It sounds like these need .png icons, one or more Apple Touch Icons, Windows 8 tile icons and a /browserconfig.xml file.

Google TV

Google TV wants an icon that’s 96×96 pixels.

Android Chrome

Chrome wants an icon that’s 196×196 pixels.

Opera Coast

Coast wants an icon that’s 228×228 pixels.

Apple Touch Icon

The iPhone/iPad collection of devices wants an icon between 57×57 and 180×180 in size. The newer the device, or the presence of a Retina screen, the higher the resolution needed.

The odd thing here is that non-Apple browsers and platforms sometimes use the Apple-named icons because they’re usually better resolution than the default one.

Microsoft Metro Interface on Windows 8 and 10

You’d think that the 150×150 tile graphic would need to have that resolution but Microsoft recommends 270×270 for this one. Er, what?

Web Apps

And now, just when you thought this couldn’t be any more complicated, /manifest.json might also be necessary and it serves a similar function as /browserconfig.xml.

Finding Some Sanity In All the Chaos

The current wisdom appears to involve providing two and only two files: a 16×16 pixel /favicon.ico and a 152×152 pixel /images/apple-touch-icon.png. You’d then reference the latter with a tag within your HTML. The .ico version can include multiple resolutions inside it—the more, the better.
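
In the HTML head, those two files end up referenced along these lines (the tags are standard; the paths follow the convention just described):

```html
<!-- most browsers will also find /favicon.ico on their own -->
<link rel="shortcut icon" href="/favicon.ico">
<!-- touch devices; other platforms often borrow this one too -->
<link rel="apple-touch-icon" sizes="152x152" href="/images/apple-touch-icon.png">
```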

As developers, I think we should push back a little and make platform designers accept this approach, working gracefully if this is all that's available.

github

I suppose most of us these days have a GitHub repository and this blog post would be the obligatory mention of mine.  I try to work on phone apps (PhoneGap) and Node.js backends every chance I get.

I can’t say that I’m very active on the open source side of things.  I only manage to throw down another version about once per week or so.  I find myself fairly ignorant of what goes on with respect to active multi-person open source projects—I only really have experience in bigger Microsoft programming shops on a Visual SourceSafe repository, for example.  And I have also used Microsoft Team Foundation Server as served up via the VisualStudio.com website from the Visual Studio program itself.

Outsource Guru on Github

I may start putting some of my older projects on there, time will tell.  I tend to select things that have a tutorial component to them so that I can demonstrate to others an intro to something.

Would I add a library to GitHub?  At the moment, I don't think so.  Libraries are useful to other open source coders but they're also usable by well-financed corporations.  I'm usually paid to code, so I think I'd like to be careful what I put out there for free.

A better github paradigm

What would be nice is if you earned credits with github for every download of your code.  And then you could spend these credits by paying for your own downloads on there.  Any individual without credits would need to buy them with money first if they wanted to download code.  If you could buy a credit for $1 then in theory you ought to be able to sell a bulk of credits for $0.75 each.  Github themselves would earn the difference since they’re hosting the platform itself.

And yet, it’s the norm these days for an average open source program to be made up of other open source code as dependencies.  So the crediting scheme would necessarily need to pay out fractional royalties to the people who created those dependent portions of code.  And so, if your own code is made up of 90% of other people’s code you would only see 10% of that credit and the rest would be distributed to them.

The benefit to a system like this is that a coder like myself—who’s used to getting paid to do this—gets paid for doing this.  Anyone who downloads code has to pay a credit from the balance on their account.  Having downloaded code they’re free to then use it.  And if they then re-bundle someone else’s code into their own then it’s become part of a commissioning scheme and everyone gets paid for their effort.
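
As a back-of-the-envelope sketch of that payout split (all names and percentages here are invented for illustration):

```javascript
// Split one download credit between a project's author and the authors
// of its dependencies, proportionally to each one's share of the code.
function splitCredit(credit, ownShare, deps) {
  // deps: { authorName: shareOfCode, ... }; shares plus ownShare sum to 1
  const payout = { author: credit * ownShare };
  for (const [name, share] of Object.entries(deps)) {
    payout[name] = credit * share;
  }
  return payout;
}

// 90% of the project is other people's code, so one credit pays out:
splitCredit(1.0, 0.10, { leftpadAuthor: 0.50, parserAuthor: 0.40 });
// { author: 0.1, leftpadAuthor: 0.5, parserAuthor: 0.4 }
```

How one would actually measure "share of the code" (lines, downloads, something else) is of course the hard, unsolved part of the proposal.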


An additional benefit to this system is that it no longer rewards the corporations who get a free pass to download unlimited code at your expense.

So the new system would work like iTunes, perhaps.  Maybe you could buy a card in a retail store with credits and redeem them on the site.  But if you created an account and uploaded a popular library then you could start earning credits almost immediately, in theory, anyway.

Adobe PhoneGap

History

In 2011, Adobe purchased Nitobi, the developer of the existing PhoneGap mobile application framework.  The code has since been released into open source as "Apache Cordova," although many of us still refer to it as PhoneGap.  If you've ever attempted to create native iOS, Android or Microsoft Phone applications then you'll enjoy this multi-platform approach.

Before PhoneGap, you'd have to install a development kit for each of the platforms you wanted to target.  And then, you'd need to learn the platform language of each major player:  Objective-C or Swift for iOS, Java for Android and XAML with C# for the Microsoft Phone.  Good luck trying to then design and maintain three completely different collections of phone app code which hope to provide the same functionality.

Today

But now with PhoneGap, you use what you probably already know:  HTML, JavaScript and CSS.  You then either 1) compress your collection of files into a ZIP file and upload it to Adobe's website or 2) use the popular github.com repository to manage your software changes and then tell Adobe the location of your repository.

I’ll add to this that jQuery Mobile does a great job of streamlining some of the work you can do with PhoneGap.  It includes both methods for interacting with the browser’s DOM and a nice collection of CSS for displaying and lining up the widgets your app will need, for example, phone-styled push buttons that are sized correctly for fingers.

Develop

An initial set of app code is created from a command-line interface, producing the collection of files you'll need for your app.  You'll usually focus on two files within this collection:  config.xml and www/index.html.  The first configures the name, version and rights that your app will need and the second defines the interface.  Use any editor you're comfortable with.  And if you're familiar with git source code management then this can be useful later when you build your app.
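
For reference, a minimal config.xml looks something like the following; the id, name and description here are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="com.example.hello" version="1.0.0">
    <name>HelloWorld</name>
    <description>A sample PhoneGap application.</description>
    <!-- the starting page of the app's interface -->
    <content src="index.html" />
    <!-- network access the app is allowed; tighten this for production -->
    <access origin="*" />
</widget>
```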

You usually develop with the help of the PhoneGap Desktop App plus a phone-specific version of the PhoneGap Developer App.  The Desktop App reads your code and serves up a development version of your application; the specific Developer App for your phone will allow you to test your code.  As you make changes in your code the Desktop App will send the new version to your phone so that you can instantly see the change.

Up to this point, none of this requires any build keys from Apple, Microsoft or, in the case of an Android app, your own self-signed certificate.  Since PhoneGap's Developer App itself is signed, you're good to go.

Test

The default set of files from PhoneGap comes pre-equipped with the Jasmine test suite built in.  Edit the www/spec/index.js file to modify the default tests, verify that the PhoneGap Desktop App is running and then execute them by bringing up the /spec folder for your application within the PhoneGap Developer App.
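
A spec in that style looks roughly like this. To keep the sketch self-contained and runnable on its own, throwaway describe/it/expect shims stand in for Jasmine itself, and the function under test is hypothetical:

```javascript
// Minimal stand-ins for Jasmine's globals; in the real project these
// come from the Jasmine library bundled under www/spec/.
function describe(label, fn) { fn(); }
function it(label, fn) { fn(); }
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`expected ${expected}, got ${actual}`);
      }
    },
  };
}

// The hypothetical function under test:
function appTitle() { return 'My PhoneGap App'; }

describe('the app', () => {
  it('reports its title', () => {
    expect(appTitle()).toBe('My PhoneGap App');
  });
});
```

In the real workflow you would edit specs like this in www/spec/index.js and watch them run in the PhoneGap Developer App as described above.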

Build

When you’re ready to start seeing your application as a stand-alone app you can then build it on the PhoneGap Build website.

You have two choices for pushing your code to PhoneGap Build:  1) compress your files into a ZIP file and upload it or 2) use a github.com repository for your project and tell Adobe that location.

Since phone apps need to be digitally signed to identify the developer, it's necessary to upload one or more keys to PhoneGap Build for this purpose.  An iOS app will need an Apple Developer key, a Microsoft Phone app will need a Microsoft Developer key and finally, an Android app uses a self-signed certificate that you can create without Google's knowledge/consent or paying them a fee (as is the case for the first two platforms).  The PhoneGap Build website provides enough guidance in this area.

Once built, the PhoneGap Build website provides one or more individual binary downloads on a per-platform basis as well as a QR Code scan image that you can use to direct your phone to fetch the appropriate one.