when did favicon get so needy?

Most of us treat favicon.ico as an afterthought: it only gets attention after we finally show the work to someone else and notice that the default icon is still in use. Up to that point in the project, it’s work to do later.

For those who don’t already know, this is the icon file that Internet Explorer displayed just to the left of the URL in the Location field. At one time it was only 16×16 pixels and enjoyed very little screen real estate, if you will. If the file was present in your website’s root directory, the browser would pull it automatically. If you were an early website implementer like myself, you discovered the concept while reading your website’s log files and seeing all the "404 - file not found" errors from newer browsers which now expected it. So you likely created one to clean up your log files and shrugged it off.

Platform Wars

Fast-forward to today and there are many popular browsers as well as platforms. Perhaps the biggest change has been the advent of touchscreen platforms that demand much larger buttons with which to invoke your shortcut. Bigger buttons mean that our earlier one-size-fits-all approach no longer works; you can’t scale up a 16×16 icon graphic to the huge sizes required for a web TV platform.

Unfortunately, nobody came up with a consistent standard for this. It would have been nice if everyone had simply agreed to populate a folder structure like this:

    /icons/favicon.ico
    /icons/icon_16x16.png
    /icons/icon_32x32.png
    /icons/icon_256x256.png
    /icons/touch_256x256.png

And done. In this perfect world these would be the only icons and tiles available for the sum total of all browsers, all platforms, all devices. Period. If an operating system or browser needed something different, it would be necessary for that system to read the closest available graphic and then create whatever else it required from it. It should not be up to the website designer to create everything that is seemingly required today.

Microsoft Internet Explorer

Microsoft originally specified /favicon.ico to include one or more images. It could contain a 16×16, 32×32, and/or 48×48 pixel version. Although the first size is good enough for the Location field of the browser, if the end-user minimizes the browser then 16×16 doesn’t seem big enough for the Windows taskbar. Alternatively, creating a shortcut to the website under some screen resolutions and settings requires an even bigger default icon, hence the three sizes.

Speaking of which, what format is a .ico file anyway? Even though Microsoft has a tool within Visual Studio to combine multiple files into an .ico file, you can actually get away with just renaming a .gif, .jpg or .png file to favicon.ico and it will work fine with most browsers.

Mobile Platforms

It sounds like these need .png icons, one or more Apple touch icons, Windows 8 tile icons, and a /browserconfig.xml file.

Google TV

Google TV wants an icon that’s 96×96 pixels.

Android Chrome

Chrome wants an icon that’s 196×196 pixels.

Opera Coast

Coast wants an icon that’s 228×228 pixels.

Apple Touch Icon

The iPhone/iPad collection of devices wants an icon anywhere from 57×57 to 180×180 pixels in size. The newer the device, and especially in the presence of a Retina screen, the higher the resolution it will need.

The odd thing here is that non-Apple browsers and platforms sometimes use the Apple-named icons because they’re usually higher resolution than the default one.
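Wiring all of these up typically means a pile of link tags in the head of the page. Here’s a sketch; the rel and sizes attributes are the actual conventions, but the file names are just an assumed naming scheme:

    <link rel="icon" type="image/png" sizes="96x96" href="/icon-96x96.png">     <!-- Google TV -->
    <link rel="icon" type="image/png" sizes="196x196" href="/icon-196x196.png"> <!-- Android Chrome -->
    <link rel="icon" type="image/png" sizes="228x228" href="/icon-228x228.png"> <!-- Opera Coast -->
    <link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon-180x180.png"> <!-- iOS -->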

Microsoft Metro Interface on Windows 8 and 10

You’d think that the 150×150 tile graphic would need to have that resolution, but Microsoft recommends 270×270 for this one. Er, what?
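For reference, a minimal /browserconfig.xml sketch for that tile might look like the following; the element names are Microsoft’s, while the file name here is just an assumed one:

    <?xml version="1.0" encoding="utf-8"?>
    <browserconfig>
      <msapplication>
        <tile>
          <!-- yes, a 270x270 source graphic for the 150x150 tile -->
          <square150x150logo src="/mstile-150x150.png"/>
          <TileColor>#2b5797</TileColor>
        </tile>
      </msapplication>
    </browserconfig>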

Web Apps

And now, just when you thought this couldn’t be any more complicated, /manifest.json might also be necessary, and it serves a similar function to /browserconfig.xml.
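Here’s a minimal /manifest.json sketch, referenced from the page with a link rel="manifest" tag; the icon file name is again an assumed one:

    {
      "name": "My Site",
      "icons": [
        {
          "src": "/icon-196x196.png",
          "sizes": "196x196",
          "type": "image/png"
        }
      ]
    }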

Finding Some Sanity In All the Chaos

The current wisdom appears to involve providing two and only two files: a 16×16 pixel /favicon.ico and a 152×152 pixel /images/apple-touch-icon.png. You’d then reference the latter with a link tag within your HTML. The .ico version can include multiple resolutions inside it; the more, the better.
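In other words, something like this in the head of each page, with /favicon.ico picked up automatically from the site root:

    <!-- the .ico at the root needs no tag; most browsers look for it anyway -->
    <link rel="apple-touch-icon" href="/images/apple-touch-icon.png">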

As developers, I think we should push back a little and make platform designers accept this approach and work gracefully when this is all that’s available.

why PowerShell sucks so badly

Here, I attempt to answer the rhetorical question, “Why does Microsoft PowerShell suck so badly?” Where to begin…? It has such promise, and it’s clear that someone spent much time coding everything. Ultimately, there appears to be power under that shell, and it’s probably true to its name. But if you can’t use the tool in the real world, it should be renamed to Microsoft PowerlessShell.

“But if you can’t use the tool in the real world, it should be renamed to Microsoft PowerlessShell.”

It’s almost as if a group of scientists in a desert setting somewhere—think “Manhattan Project”—created a collection of methods useful for annihilating the planet and then, almost as an afterthought, placed so many preventive controls upon their use that literally nobody could in fact blow anything up.

Today’s task is to automate the creation of a VPN button for Windows 10-based remote users here at the office. In theory, end-users can then just double-click a PowerShell script that I’ve placed on a SharePoint server. I would then individually share the link with them, which would remotely install the new VPN profile. Sounds easy enough. In fact, it sounds much easier than the two-page-long tutorial in a Word document which attempts to teach them how to do all this manually. Have you ever seen how long an L2TP shared key phrase can be? It’s pretty bad. Just think of all the support calls I’m going to get if I can’t script this.

Is the PowerShell documentation easy to use? Hell no, it’s not. I’ve just spent a full hour trying to piece together the script required from the cobbled-together documentation on Add-VpnConnection. Does my script work under a test rig? I wish I knew, because at the moment I can’t actually run the script in any form or fashion because Microsoft doesn’t want me to.

“Does my script work under a test rig? I wish I knew, because at the moment I can’t actually run the script in any form or fashion because Microsoft doesn’t want me to.”
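For the record, the script itself boils down to little more than one cmdlet call. Here’s a sketch under stated assumptions: the connection name, server address, and key are placeholders, and your authentication method may differ.

    # Sketch: create an L2TP/IPsec VPN profile with a pre-shared key.
    # The name, server address, and key below are placeholders.
    Add-VpnConnection -Name "Office VPN" `
        -ServerAddress "vpn.example.com" `
        -TunnelType L2tp `
        -L2tpPsk "TheLongPreSharedKeyPhraseGoesHere" `
        -AuthenticationMethod Pap `
        -EncryptionLevel Optional `
        -RememberCredential -Force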

Now granted, I’m an Administrative user on my newly-upgraded Windows 10 laptop. The script fails with a terse error message which suggests that I need to run the PowerShell command as Administrator. Well, that would foil things here in the real world, because I’m trying to have the end-users run this script remotely so that I—the administrator—don’t have to be there in the first place.

So I doggedly trudge ahead, end my session, and open up PowerShell by right-clicking it and choosing Run As Administrator. And yet, this still doesn’t work. This time it fails with another terse error message which suggests that Set-ExecutionPolicy might help. I then research this to find that “Unrestricted” is the probable attribute, but when attempting to run this, I get yet another terse error message suggesting that I can’t change the policy. Seriously?
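For what it’s worth, the usual escape hatches look like the sketch below: loosen the policy for the current user, or bypass it for a single run. (The script name is a placeholder, and a Group Policy can still override both.)

    # Loosen the policy for the current user only:
    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

    # ...or bypass the policy for a single invocation without changing it:
    powershell.exe -ExecutionPolicy Bypass -File .\New-VpnButton.ps1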

I could now go back to my earlier research and re-learn how to digitally sign a script so that I can run it. But the process to create and troubleshoot a script usually requires multiple iterations before the script works perfectly. And this is especially true since nobody on the Internet has yet provided a good example of creating a VPN tunnel to a SonicWall over L2TP/IPsec with a pre-shared secret, authenticating to the firewall instead of the domain controller. Designing a script like this takes trial and error. Adding a signing phase between each script attempt effectively means: I’m not going to do this.

“Adding a signing phase between each script attempt effectively means:  I’m not going to do this.”
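For completeness, the signing dance looks roughly like this sketch, assuming a self-signed certificate that the machine has also been told to trust (and again, the script name is a placeholder):

    # One-time: create a self-signed code-signing certificate.
    $cert = New-SelfSignedCertificate -Subject "CN=Dev Code Signing" `
        -Type CodeSigningCert -CertStoreLocation Cert:\CurrentUser\My

    # Every iteration: re-sign the script after each and every edit.
    Set-AuthenticodeSignature -FilePath .\New-VpnButton.ps1 -Certificate $cert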

In short, this is why Microsoft PowerShell sucks. If you have to sign scripts just to run them while testing, then it’s not worth the effort. Why not include a button in the PowerShell ISE which allows me to “Sign & Execute” my script attempt? And if I don’t have a digital certificate, then open a dialog box to gather the information and magically make this happen. Or even better, just allow me to create and run scripts without all the nonsense. How about a big toggle that says “Unsafe Mode” versus “Safe Mode”?

unseen complications

Don’t you hate the unseen complications that rear their ugly heads somewhere down the line? Today’s drama involved the inclusion of a very cool fullscreen API by Vincent Guillou. Of course, it worked great in development and then failed silently on production. Here is an overview of what makes my production site a little different.

Production Site Overview

GoDaddy domain name hosting with the “forwarding with masking” option -> Firebase.com-based hosting site

GoDaddy does this technically by serving up a single HTML page which simply frames the remote content. By its nature, that page uses the HTTP protocol and cannot use HTTPS. Its configuration does allow you to choose either HTTP or HTTPS for the framed content, however. You’d think that you would have plenty of room to make something work. And it did work just fine up until the latest addition to the project: a button which allows the browser to go full screen and back again.

Unfortunately, the first push to production failed silently. The button was there but didn’t seem to work. Opening the browser’s developer tools, I saw that the browser had blocked the content because the framing page was HTTP while the framed content was HTTPS, and this isn’t allowed.
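To picture the setup: the masking page GoDaddy serves is roughly this shape (a sketch, not their actual markup), an HTTP parent document framing the HTTPS-hosted site:

    <!-- served over http://mydomain.com -->
    <html>
      <head><title>MyDomain</title></head>
      <frameset rows="100%,*" border="0">
        <frame src="https://happy-pretty-8464.firebase.com/" frameborder="0">
      </frameset>
    </html>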

Okay, so I thought I could then adjust GoDaddy’s settings so that the content would be framed as HTTP to match the parent document. Unfortunately, Firebase always uses HTTPS and does not support HTTP. Since I can’t mix the two protocols, and I can’t easily promote GoDaddy’s framing page to HTTPS or demote Firebase’s framed page to HTTP, I was screwed.

To make a long story short, I either had to pay for hosting at Firebase (which allows you to bind your domain name to their hosting server) or abandon the cool feature. Since I’m trying to highlight the cool new features of the latest browsers, I decided it’s better to just pay Firebase in this case.

Getting all this to work was still a bit technical, since Firebase only bound a single entity (www) to the website using my domain. This means that if someone enters just my domain name, they’re stuck at a GoDaddy-parked page. To work around this problem, I set up a redirect to deal with this situation. This time it looked like: myJS.io -> http://www.myJS.io. Problem solved.

So now, the cool new feature is working on production and the implementation is slightly simpler, not that anyone else would necessarily know.

does this platform make my app look fat?

I recently brought down some development code for a website I’m working on. Little did I know just how large this thing would be until I attempted to back it up. Granted, sometimes it’s good to have a full-featured set of code in a turnkey system to develop against. But unless we know how to use the many included pieces of code within that system, is it really worth it?

My brand-new Polymer Starter Kit website has 59,604 files in it and it weighs in at about 270MB.

Now, keep in mind that the /dist folder under the working project directory only has 120 files in it. Those are the ones that get pushed to the production website. That’s a mere 0.2% of what’s in this project. In fact, let’s do a breakdown by subdirectory to see what’s using up all the space (a quick script for producing this kind of report follows the list).

  1. node_modules – 95.2%
  2. app – 4.6%
    1. bower_components – 4.47%
    2. elements – 0.015%
    3. images – 0.04%
    4. scripts – 0.007%
    5. styles – 0.007%
    6. test – 0.008%
    7. / – 0.01%
  3. dist – 0.2%
  4. docs – 0.02%
  5. tasks – 0.003%
  6. / – 0.02%
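The percentages above are just file sizes tallied per subdirectory; a quick Node.js sketch along these lines, offered purely for illustration, produces the same kind of report:

    // du.js - rough per-subdirectory size report: node du.js <projectDir>
    var fs = require('fs');
    var path = require('path');

    function dirSize(dir) {
      return fs.readdirSync(dir).reduce(function (total, name) {
        var full = path.join(dir, name);
        var stat = fs.lstatSync(full); // lstat: don't follow symlinks
        return total + (stat.isDirectory() ? dirSize(full) : stat.size);
      }, 0);
    }

    var root = process.argv[2] || '.';
    var grand = dirSize(root);
    fs.readdirSync(root).forEach(function (name) {
      if (fs.lstatSync(path.join(root, name)).isDirectory()) {
        var pct = (100 * dirSize(path.join(root, name)) / grand).toFixed(2);
        console.log(name + ' - ' + pct + '%');
      }
    });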

I’d suggest that the app code and images above (elements, images, scripts, styles) are the part of the website which makes it unique in content. And node_modules, by far the biggest piece, is something I’m really not sure about. Is it code? Is it bloat?

The main index.html file is only slightly longer than 300 lines. Over a hundred of those are comments or blank lines. On the one hand, I could suggest that Polymer allows you to do much with only a few lines of code. And yet, I’m left with literally a ton of code like the lower portion of an iceberg lurking below the surface.

“And yet, I’m left with literally a ton of code like the lower portion of an iceberg lurking below the surface.”

As I wrote in New-Tool Debt earlier in this blog, I really have no hope of ever knowing what’s in this project. There just isn’t enough time to research all this before I’ll be asked to do something else.

Reviewing node_modules, I see that twenty of the modules begin with the word “gulp”. Suggestion to Google: combine all these together into an uber-module and come up with a catchy name of some sort…

SuperBigGulp


use your domain name not theirs

There are times when you want to use a service provider like Gmail, WordPress, Firebase, Heroku or OpenShift but you don’t necessarily want to keep advertising their domain name with your business, blog or website.

Converting me@gmail.com to me@MyDomain.com

This one is easy enough, assuming that you know your way around your domain registrar’s configuration.  I usually park things at GoDaddy these days so I’ll use them as an example. Likewise, I’ll assume that you have a free mailbox at Google’s Gmail.

Assumptions:

  • Mailbox: me@gmail.com
  • Domain: MyDomain.com
  • Registrar: GoDaddy

Instructions:

  1. Write down the collection of email entities that you would like to forward to your mailbox
  2. Log into GoDaddy, visit the Manage My Domains page
  3. Choose the Manage Email link associated with your domain
  4. If you haven’t already, set up email forwarding for your domain
  5. Choose the Create Forward link
  6. Type in the first email entity from step one above, for example, support
  7. When you type the @ symbol you next get to select MyDomain.com
  8. In the next field, enter your mailbox name of me@gmail.com
  9. In the next field, choose the Free email forwarding with domain: MyDomain.com
  10. Click the Create button
  11. Repeat for each of the entities you’d like to have: support@MyDomain.com, info@MyDomain.com, MyName@MyDomain.com, etc.

It’s best at this point to wait a couple of minutes and then send a test email to one of these entities to see if it arrives in your mailbox.  Once you’ve verified that it works you may begin to use it confidently.

I routinely create multiple mailboxes for notification apps so that they can have their own email queue.  Again, email forwarding hides the Gmail mailbox name.

Converting wordpress.com/me to blog.MyDomain.com

Again, this is easy enough using a feature called forwarding with masking.

Assumptions:

  • Blog: wordpress.com/me
  • Domain: MyDomain.com
  • Registrar: GoDaddy

Instructions:

  1. Log into GoDaddy, visit the Manage My Domains page
  2. Choose the gear icon associated with your domain and then choose the Manage DNS link
  3. Choose the Settings tab
  4. Under Forward -> Subdomain choose the Manage link
  5. Click the Add Subdomain Forwarding button
  6. Enter blog as the subdomain
  7. Select http as the protocol
  8. Enter wordpress.com/me in the next field
  9. Select 301 as the Redirect Type
  10. Under forward settings choose Forward with Masking
  11. Click the Add button

Give it a couple of minutes before giving this a try to see if it works.

The same technique works for your website.  For example, I’m hosting a website at Firebase.com, another at Heroku, yet another at OpenShift.com, etc.  Each of these hosting providers would probably love it if I allowed the world to see their domain name in the URL.  But I’d rather not, since that’s free advertising for them.  Don’t these examples look better?

happy-pretty-8464.firebase.com -> MyDomain.com

myphpapp-mydomain.rhcloud.com -> MyCoolApp.MyDomain.com

myappmydomain.herokuapp.com -> MyApp.MyDomain.com

Honestly, domain names cost you year after year.  You might as well take advantage of the many free services which are included with your domain registration.

cross-origin resource sharing

I keep finding myself going over to the Enable CORS website to copy/paste their example code into my server-side code.  They’ve saved me more than once.

Yet again today I was momentarily flummoxed by some seemingly-correct JavaScript code in a PhoneGap project to fetch a JSON response from the server.

    $.getJSON(strURL, function(jsonData) {
      // Do nothing at this level but wait for the response
    }).done(function(jsonData) {
      // Do something upon success
    }).fail(function(XMLHttpRequest, textStatus, e) {
      $('#homeStatusParagraph').html('Lookup failed, status=[' + textStatus +
        '], error=[' + e + ']');
    }).always(function(jsonData) {
      // Do nothing here
    });

Interestingly, the fail() code section ran instead of the expected done() section.  At this point I made the call manually to the server address represented in the strURL variable, and it returned exactly what I thought it would: a JSON-formatted document.

The status returned from getJSON() was simply error and the returned e object was empty: not very useful for troubleshooting.  What’s actually going on is that the client-side browser is blocking the inclusion of JSON fetched from another origin, presumably for security reasons.

Fortunately I’ve dealt with this before, and inserted the CORS middleware shown below into my Node.js server’s app.js file.

    app.use(passport.initialize());
    app.use(passport.session());
    // CORS headers: let any origin read our responses and permit the
    // common request headers. Registered before the routes so it runs first.
    app.use(function(req, res, next) {
      res.header("Access-Control-Allow-Origin", "*");
      res.header("Access-Control-Allow-Headers",
        "Origin, X-Requested-With, Content-Type, Accept");
      next();
    });
    var routes = require("./routes/index");
This immediately fixed the problem and getJSON() on the client now happily worked, parsing the response from the server.

timing is everything

I discovered something strange and bothersome today when I began a debugging session on a Node.js-based website I was working on.  My logs were clearly showing that my browser was pre-caching content from a site before I visited it.

[Screenshot: a fresh Safari session alongside a console.log, showing content being fetched as “http:” is typed into the address field]

As you can see from the console.log() content on the right and a new Safari session, the browser is already fetching content in realtime as I type “http:” into the address field.  Note that I haven’t pressed Enter or clicked anything yet to initiate the download.  Also note that the site matched in the URL isn’t the site I’m debugging; the site whose log is on the right is served on a different port.  So the assumption here is that Safari on startup tries to make things seem faster by pre-caching content from what it thinks you’ll need later.

“…Safari on startup tries to make things seem faster by pre-caching content from what it thinks you’ll need later.”

I already had a sense of this on my iOS device, since it’s painfully slow when you switch back to the Safari app and hope to quickly look up something.  No joy.  The iOS Safari app seemingly takes forever because of this startup pre-caching phase; it really grinds its gears trying to reload content from all the tabs that were open before, creating an unresponsive experience for the user.

What does this mean to the software developer?  Normally, we expect to follow a link and then the browser does our bidding; we then review the logs in realtime and watch the performance for a session.  In some workflows I might start up the browser, edit a page, save that page, and then visit the link to review the work I’ve just done.  But here we see that the browser anticipated that I might re-visit the site and has begun caching the content in case I ask for it.  In this scenario my intention was to see the page including my latest edits, and that’s not necessarily what I’d be viewing; I could very well be seeing the site from before my edits, because my browser was trying to provide a responsive experience for me.

Looking for a workaround within Safari’s preferences, there doesn’t immediately seem to be a feature to turn off pre-caching of content.  This thread, however, appears to address the concern, and the following preference setting may be toggled off.

[Screenshot: the Safari preferences pane with the relevant setting]