the tyranny of lint

I’ve found what appears to be an awesome bit of WebGL code for my website. I honestly spent only an hour tweaking the demo code so that it looks amazing. It’s got 3D movement, perspective, animation and very realistic-looking physics. The only major problem now is shimming it into my Polymer Elements website, somehow getting all this past Lint’s watchful eye.

Lint

JavaScript Lint is a tool for checking your code, presumably to catch poor coding practices. I suppose that’s all well and good for the code I write, but what happens when I want to bring in someone else’s code that has never been run through Lint before? Well, the results are fairly dramatic for me, the person trying to bring it in. Since the other coder doesn’t use Lint, the burden falls squarely on my shoulders to laboriously go through their code just to make Lint happy. And let’s face it, Lint is an annoying little bitch when you get right down to it.

Why Lint?

You might be wondering why I added this to my project in the first place. I suppose the honest answer would be: I didn’t. The gulpfile.js in this project builds everything so that only a fraction of the code is then published to the website. I suppose this is “best practices” if you’re a big company like Google and you want to obfuscate your HTML source as much as possible. Lint is just part of the build process I inherited by using the Polymer template code.

If you’ve never seen a gulpfile.js and have never used gulp before, you might get your feet wet with a project using Polymer Elements by Google. You can run the build by simply typing gulp, or do something more sophisticated like building and then serving up your website locally with gulp serve.

Held Hostage

I’ve just spent two long sprints of coding work trying to get three files past Lint.

The first is a JavaScript file called Sylvester. It’s a vector/matrix library and it looks like it’s very useful. It’s a dependency of the demo I brought in. This single file alone has taken two hours of trying to get it past Lint’s complaints. The file is perhaps 1500 lines long. Honestly, I can’t say that it’s buggy. It’s just that Lint expects you to be an anal-retentive coder. For example, if you have a single variable in camelCase style, then ALL variables in the file must be in the same style. This is one of the myriad reasons why Lint will fail your build.
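To give you a flavor of the kind of thing it objects to, here’s a contrived example of my own; mixing naming styles in one file is enough for a camelcase-style rule to fail the build.

// This one is fine as far as a camelcase rule is concerned...
var rowCount = 10;

// ...but this one gets flagged, and the lint step fails on the warning.
var row_index = 0;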

The next file is glUtils.js, which appears to augment the Sylvester library. I just spent three hours working with Lint trying to get this one to stop complaining.

And the last file is WebGL.js, which appears to be the work of the demo author himself. I’ve just spent over four hours editing this one to make Lint happy.

Nine hours hacking away at other people’s code and I’m still not finished. Lint is still complaining about a variety of things and I haven’t even actually added the code to my project, other than dropping the files into the scripts folder.

What Now?

I’m left with a decision to make.

  1. Do I abandon Lint’s review of these three files by filtering them out of the appropriate section of the gulpfile.js job? (See the sketch after this list.)
  2. Do I research Lint to find out how to tell it to ignore the offending section(s) of code so that it thinks it’s happy?
  3. Do I abandon the demo code idea completely and not bring it into my project?
  4. Do I abandon the Lint step completely from the build process itself?
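For what it’s worth, options 1 and 2 might look roughly like the sketch below, assuming the build lints with gulp-jshint the way the Polymer Starter Kit’s gulpfile does; the task name and file paths are placeholders of my own.

// gulpfile.js -- option 1: exclude the borrowed files from the lint task.
var gulp = require('gulp');
var jshint = require('gulp-jshint');

gulp.task('jshint', function () {
  return gulp.src([
      'app/scripts/**/*.js',
      '!app/scripts/sylvester.js',   // skip the three vendor files
      '!app/scripts/glUtils.js',
      '!app/scripts/WebGL.js'
    ])
    .pipe(jshint())
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});

// Option 2: inside one of those files, tell JSHint to skip an offending section.
/* jshint ignore:start */
// ...code Lint would otherwise complain about...
/* jshint ignore:end */

Option 4 would simply be dropping the lint task out of whatever task chain calls it.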

I don’t know at this point. I do know that I’m tired of doing this, though.

Wishes

What would be nice is if popular editors out there could automatically review JavaScript while you’re typing it. In theory, this might minimize the work for other people who wish to use your code.

Secondly, it would be nice if github had a built-in checker which could indicate a Lint rating, say, for anything in the repository. Or possibly it might be a flag-style attribute. Regardless, if you were considering bringing in some code you’d have an early warning that you might spend days having to update it.


database app, no server-side

This is new for me. As a long-time website developer I consider myself a hardcore backend developer. For years I’ve contracted out as the guy you’d go to for the database design and subsequent server-side code to access that database. And now I find myself working on a website with a slick-looking frontend and—gasp!—no server-side coding at all.

“How is this even possible?” you ask. Even a week ago, I’d have been just as confused as you may be now.

Firebase

Fortunately, there’s a platform called Firebase which actually allows you to write a database application with no server-side code whatsoever.

Here’s a list of things you’d normally need backend code to do on behalf of activities initiated by client code (on both your users’ and your admins’ browsers):

  1. Authentication, password maintenance, rights control and logged-in state management
  2. Creating database records or objects
  3. Reading from database records or objects
  4. Updating database records or objects
  5. Deleting database records or objects

It turns out that you can configure Firebase to use email/password authentication and, having done that, you can build your entire site without necessarily writing any server code.
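Here’s a rough sketch of what that looks like from client-side JavaScript using the Firebase web client; the app URL and the data shapes are made up for illustration.

// Hypothetical Firebase app URL.
var ref = new Firebase('https://your-app.firebaseio.com');

// 1. Authenticate with email/password, entirely from the browser.
ref.authWithPassword({ email: 'me@example.com', password: 'secret' }, function (error, authData) {
  if (error) {
    console.log('Login failed:', error);
    return;
  }

  // 2. Create a record.
  var postRef = ref.child('posts').push({ title: 'Hello', body: 'My first post' });

  // 3. Read records (and keep receiving them as they change).
  ref.child('posts').on('value', function (snapshot) {
    console.log(snapshot.val());
  });

  // 4. Update a record.
  postRef.update({ title: 'Hello, world' });

  // 5. Delete a record.
  postRef.remove();
});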

As an added benefit you then don’t have to find a hosting provider for that server-side code either. And since Firebase allows you to serve up your static HTML website, this appears to be a win-win.

Changing your perspective

Server-centric

In other systems, Node.js for example, you write your application from a server-centric perspective. You might begin by creating something which listens on a particular port, set up a router that decides which pages are delivered, and then set up handlers for when a page is requested or when form data is submitted to a page. Lastly, you might write some separate templates which are then rendered to the client when a page is requested. The design approach is very much: server-side first, client-side second.
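A minimal sketch of that server-centric flow, assuming Express with Jade templates; the routes and template names are only illustrative.

var express = require('express');
var app = express();

// Templates are rendered server-side and delivered to the client.
app.set('view engine', 'jade');

// Router: decide which pages get delivered.
app.get('/', function (req, res) {
  // Merge data into views/home.jade and send the result.
  res.render('home', { title: 'My Site' });
});

// Handler for form data submitted to a page.
app.post('/contact', function (req, res) {
  // ...persist the submission somewhere, then...
  res.redirect('/');
});

// Listen on a particular port.
app.listen(3000);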

Client-centric

Firebase appears to be turning things completely around. In this case you might begin with the page design itself using something new like Google’s Polymer framework. You would focus a lot of attention on how great that design looks. But at some point you need to register a new account and then authenticate, and this is where you’d code it in client-side JavaScript. Here, the design approach is: client look-and-feel first, client JavaScript to authenticate second.

Rendering static plus dynamic content

In the past we might have rendered pages with server-side code, merging data in with a template of some kind, say, something written in Jade. In this new version we still might have a template, but it’s just on the client now. Additionally, Polymer allows custom elements to be created, and if you’ve ever written server-side templates, Polymer’s data binding will feel familiar.
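For example, an element’s script can declare properties that its template binds to, much as a server-side template would have data merged in. Below is a minimal, made-up element using Polymer’s registration call; the template itself would live in the element’s HTML import.

// Registers a hypothetical <blog-post> element whose template can use
// {{postTitle}} and {{body}} bindings that re-render whenever they change.
Polymer({
  is: 'blog-post',
  properties: {
    postTitle: { type: String, value: 'Untitled' },
    body: { type: String, value: '' }
  },
  ready: function () {
    // Setting a property from script updates any bindings in the template.
    this.postTitle = 'Hello from the client';
  }
});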

Page routing

The Polymer framework includes a client-side routing mechanism so that you may serve up different pages from the same HTML document. But even if you don’t use this approach, Firebase’s hosting will do that for you; just create separate pages and upload them and they’ll take care of the rest.
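In the Starter Kit this client-side routing appears to be wired up with the page.js library inside app/elements/routing.html; here is a trimmed-down illustration with route names of my own.

// page.js and the auto-binding <template is="dom-bind" id="app"> are
// loaded by the surrounding HTML import; every route below is served
// from the same index.html document.
page('/', function () {
  app.route = 'home';
});

page('/users/:name', function (data) {
  app.route = 'user-info';
  app.params = data.params;
});

// Start listening for URL changes.
page({ hashbang: true });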

Why you might want this

Like me, you might have built up a level of comfort with earlier approaches. I myself often think about a website design from the server’s perspective. One downside to this approach is that you could end up with a website that looks like you spent 90% of your effort on the backend code and didn’t have enough time left in your schedule to make things look really awesome for your users.

By beginning your design with the UI you are now forcing yourself to break out of those old habits. You work up something that looks great and only then do you begin the process of persisting data to the database server.


This lets you focus on how the application will look on many different devices and screen resolutions, and on whether those devices include a touchscreen or features such as GPS, re-orientation and so on.

Google and Firebase

All of this Firebase approach works rather well with the Polymer framework and I’m sure this is the intent. In fact, there seems to be a fair bit of collaboration going on between the two, with Google suggesting on their own website that you host on Firebase.

Scalability

I think one big benefit of having no server-side is that there is no server-side app to scale up. The downside is that as traffic grows you’ll likely have to upgrade your hosting plan with Firebase, and the pricing may or may not be as attractive as other platforms like Node.js on Heroku.

Custom domain

Of course, you have to pay at least $5/month to bind your custom domain name to your instance. I wouldn’t call that expensive necessarily, unless this is just a development site for you; in that case, feel free to use the issued instance name for your design site. At this $60/year level you get 1GB of storage, which is likely enough for most projects.

Pricing note

Firebase’s pricing page mentions that if you exceed your plan’s storage and transfer limits then you will be charged for those overages. Obviously, on the free plan you haven’t entered your credit card information yet, so presumably they would instead cut off service at that point. If you have opted for a paid tier, note that under-sizing it could incur additional charges.

Overall thoughts

So far, I think I like this. Google and Firebase may have a good approach to the future of app development. By removing the server you’ve saved the website designer a fair bit of work. And by removing the native client-side mobile app for smartphones, you’ve removed the need to digitally sign your code with iOS/Microsoft/Android developer certificates, or to purchase and maintain those certificates at all.

All of this appears to target the very latest browser versions out there, the ones which support cool new features like parallax scrolling effects, to name just one. The following illustration demonstrates how different parts of your content scroll at different rates as your end-user navigates down the page.

[Illustration: parallax scrolling effect]

Since parallax scrolling is now “the new, new thing” of website design, I’d suggest that this client-centric approach with Polymer and Firebase is worth a look.

does this platform make my app look fat?

I just recently brought down some development code for a website I’m working on. Little did I know just how large this thing would be until I attempted to back it up. Granted, sometimes it’s good to have a full-featured set of code in a turnkey system to develop against. But unless we know how to use the many included pieces of code within that system, is it really worth it?

My brand-new Polymer Starter Kit website has 59,604 files in it and it weighs in at about 270MB.

Now, keep in mind that the /dist folder under the working project directory only has 120 files in it. Those are the ones that get pushed to the production website. That’s a mere 0.2% of what’s in this project. In fact, let’s do a breakdown by subdirectory to see what’s using up all the space.

  1. node_modules – 95.2%
  2. app – 4.6%
    1. bower_components – 4.47%
    2. elements – 0.015%
    3. images – 0.04%
    4. scripts – 0.007%
    5. styles – 0.007%
    6. test – 0.008%
    7. / – 0.01%
  3. dist – 0.2%
  4. docs – 0.02%
  5. tasks – 0.003%
  6. / – 0.02%

I’d suggest that the app code and images above (the elements, images, scripts and styles folders) are the part of the website which makes it unique in content. The bulk of the rest, node_modules and bower_components in particular, is something I really can’t account for. Is it code? Is it bloat?

The main index.html file is only slightly longer than 300 lines. Over a hundred of those are comments or blank lines. On the one hand, I could suggest that Polymer allows you to do much with only a few lines of code. And yet, I’m left with literally a ton of code like the lower portion of an iceberg lurking below the surface.

“And yet, I’m left with literally a ton of code like the lower portion of an iceberg lurking below the surface.”

As I wrote in New-Tool Debt earlier in the blog, I really have no hope of ever knowing what’s in this project. There just isn’t enough time to research all this before I’ll be asked to do something else.

Reviewing node_modules, I see that twenty of the modules begin with the word “gulp”. Suggestion to Google: combine all these together into an uber-module and come up with a catchy name of some sort…

SuperBigGulp


new-tool debt, part 2

In my last post I described something called new-tool debt which you incur each time you add someone else’s module to your project; you now owe a debt of research into this new tool to determine how it works.  I continue here to describe everything I’m learning about the behind-the-scenes code within Google’s Polymer Starter Kit.

The Development Cycle

A typical development cycle then looks like the following collection of commands.

$ gulp serve

This will run a gulp session using the gulpfile.js file to rebuild the code in your app folder into a temporary folder and then serve it up in your browser at http://localhost:5000/.
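Under the hood this amounts to a local server plus a file watcher. Here is a simplified sketch of the idea, assuming BrowserSync the way the Starter Kit’s gulpfile uses it; the real task does considerably more.

var gulp = require('gulp');
var browserSync = require('browser-sync');
var reload = browserSync.reload;

// Serve the temporary build folder and the app folder, then reload the
// browser whenever a source file changes.
gulp.task('serve', function () {
  browserSync({
    port: 5000,
    server: { baseDir: ['.tmp', 'app'] }
  });

  gulp.watch(['app/**/*.html', 'app/scripts/**/*.js', 'app/styles/**/*.css'], reload);
});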

$ vi app/index.html

I don’t actually use the vi text editor but you get the gist. With each save of the file, gulp knows that it should rebuild and then refresh your browser session. You continue in this edit + save + review fashion.

The Production Push

I like to commit my code with git before pushing to production; it’s a good habit to follow.

$ git status
$ git add app/index.html
$ git commit -m "Edited to cleanup home page"

The first command shows the status of what file(s) have changed; it would have reported that I’d edited the home page. The second command stages the home page for source control and the third commits it with a message describing the change.

Now we’re ready to rebuild, upload to production and review the results.

$ gulp

This command then rebuilds the code in your app folder to the dist folder.

$ firebase deploy

This command uploads your code from the dist folder to your Firebase website location.

$ firebase open

This command will open a browser so that you may review your production website.

Troubleshooting

You may find that you need to order your edits so that gulp serve is happy.  The gulp session can crash if you attempt to reference, say, a custom element in your home page before you’ve saved your edits in the file(s) which create and reference your new element. In this case, save all the files in question and then run gulp serve again.

Testing

Google has included a test platform within the code they’ve distributed. They call it the Elements Tests Runner and it’s yet another thing that’s part of your new-tool debt. So let’s see how you use it.

$ gulp serve

As usual, have gulp rebuild and serve up your website. In your browser you’ll need to manually visit http://localhost:5000/test/index.html by editing your browser’s location bar. At the moment, my tests produce the following page when they complete.

[Screenshot: output of /test/index.html]

For reasons which were a mystery to me at first, Google decided to run each custom element test twice. This explains what appears to be a doubling of tests.

Under the hood, when I review the app/test/index.html file, I see that it pulls in a web-component-tester script from bower_components and then calls the WCT.loadSuites() method to list which tests it wants to run.

WCT.loadSuites([
  'my-blog-list-basic.html',
  'my-blog-list-basic.html?dom=shadow',
  'my-project-list-basic.html',
  'my-project-list-basic.html?dom=shadow',
  'past-projects-basic.html',
  'past-projects-basic.html?dom=shadow'
]);

So we can see why each test ran twice: the second entry in each pair passes an argument asking for a shadow DOM, whatever that is. This page seems to know what this is about. I guess I’ll add a bookmark for myself and research that later to find out how it impacts my testing.

Since I’ve been radically modifying my Polymer Starter Kit to suit my own needs, I quickly broke the initial set of tests. I immediately dropped the greeting custom element they included, so this broke the included test. I found it was then necessary to alter the tests being run and even to create my own custom elements and write tests for them. And yet even though there is some coverage for my new custom elements, I get the feeling that I have only scratched the surface of what my tests should include to adequately test the boundaries. And I don’t really know this test platform yet, to be honest. Again, I’m confronted with the new-tool debt I owe on this project before I should consider it a “safe” project.
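For what it’s worth, each element test is just a mocha-style suite living in a small HTML page under app/test/; suite, test and assert come from the environment that web-component-tester sets up. The script portion of one of mine looks roughly like this, with a hypothetical element and property.

// Inside a test page such as app/test/my-blog-list-basic.html, after the
// page imports the element under test and declares
// <my-blog-list id="list"></my-blog-list> in its body.
suite('my-blog-list', function () {
  var list = document.querySelector('#list');

  test('starts with no posts', function () {
    assert.isDefined(list);
    assert.equal(list.posts.length, 0);      // hypothetical `posts` property
  });

  test('renders a post once one is added', function () {
    list.push('posts', { title: 'Hello' });  // Polymer's array mutation helper
    assert.equal(list.posts.length, 1);
  });
});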

I’d say it’s not bad for a weekend’s worth of work; you can visit the production site for yourself to see what it looks like.

In my next post, I’ll review an edit I had to make in the source code for the web-component-tester bower element.  I discovered an outstanding issue that hasn’t yet been fixed on their github site.  By altering my local version I was able to get the test suite to report the correct completion percentage.


new-tool debt

Here, I’m defining a new term for the learning debt you incur when you use a new tool within your project space.  It’s the accumulation of time you’ll spend to fully understand the new tool, what it does and doesn’t do, to know when it breaks, to recognize the new error messages it throws and to learn how to “pop the hood” and investigate further when there is trouble.

“New-tool debt” is the total time you’ll need to spend because you’ve added a new tool to your project.

Don’t get me wrong, I like many of the tools available in the world of open source.

Polymer Starter Kit

I’ve decided to use Google’s new Polymer collection of code and I find myself both excited and daunted at the same time. Within moments, I have a template that’s generated lots of code for me, all within a new framework. And yet now I have new tools that are unfamiliar to me. I’ll walk you through the list.

Npm

Easy enough: the installation calls for the Node Package Manager, but I’m already familiar with this program and, of course, it’s already installed.

$ npm install

From my experience this tells npm to read the application folder’s package.json file and to update the node_modules folder with the dependencies required.

Bower

The installation of the kit requires that I first install Bower.

$ npm install -g bower

This command would install Bower globally on my system.

$ bower install

Presumably, this runs Bower in my application’s folder and uses the bower.json file in a way that’s similar to npm.  The file seems to look like package.json.

Git

Again, I’m familiar with git already as great source control management software. Note that they indicated this is an optional step, which I should have paid more attention to. It means that in a later step, the hosting site won’t pull my code from some repository, nor will it pay attention to the commit status of my individual files. This is more important than you might think, since what gets pushed to production should only ever be code that has been committed. That isn’t the case for what comes next in their demonstration. Carrying on, though…

$ git init

This command would create a hidden .git folder and initialize it from what it sees in the current directory.

$ git add . && git commit -m "Add Polymer Starter Kit."

Here, I would be adding all files into the git system to manage them and then committing them for the current version of the project.

Gulp

The installation also calls for Gulp to be added to my system.

$ npm install -g gulp

This command would install Gulp globally on my system.

$ gulp

What seems to be happening here is that the contents of gulpfile.js are read in by this command, which then does a number of things to collect files and process them into a subfolder called dist under my application folder. The term they use is to build the source. That dist folder will be used again in a moment.
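Conceptually, the default task is just a pipeline that ends with files landing in dist. A heavily stripped-down sketch of the idea follows; the real gulpfile also minifies, vulcanizes and so on.

var gulp = require('gulp');

// Take sources from app/ and write them to dist/.
gulp.task('copy', function () {
  return gulp.src(['app/**/*'], { dot: true })
    .pipe(gulp.dest('dist'));
});

// Running plain `gulp` fires the default task, which here just copies.
gulp.task('default', ['copy']);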

$ gulp serve

This command appears to be starting up a webserver on the localhost IP address and serving up the pages in my application folder.  Note that this will show updates to individual source files like app/index.html without first running another gulp command to build them.  In other words, it’s not serving up the pages in the dist folder but rather the files in my application folder.

Firebase

Here in the demo Google is suggesting that I sign up for a Firebase account and push the code to this free hosting site.  I then visited the website and created an account where they generated a new virtual host and URL for me.  I then needed the command line interface for Firebase.

$ npm install -g firebase-tools

This would install the Firebase command line interface globally on my system.

$ firebase init

This command is supposed to ask some questions in order to create a firebase.json file in my application folder.  Unfortunately Google’s instructions were lacking and it’s first necessary to run the following command:

$ firebase login

This command will direct you to the Firebase website to create a login token for use with the other commands. Having run this successfully, I was then able to run firebase init again. It allowed me to identify the previously-created virtual host at Firebase, and I then identified my application’s dist folder as the folder to publish.

$ firebase deploy

This command will then take the contents of your dist folder and push it to your virtual host on Firebase, deploying it to production. Note that this isn’t the way other hosting sites work; Heroku, for example, requires the code to be committed to a git repository and pushed first, and the publishing event then deploys from that repository. Personally, I prefer the Heroku version since it means that your code must be committed in git and pushed to a repository before it may be deployed to production. The Firebase version means that un-committed code can be built and deployed. To me this doesn’t seem very controlled. Returning to the Firebase commands…

$ firebase open

This convenience command will open a browser window to your remote virtual host at Firebase so that you can see your newly-deployed website.

New-Tool Debt

And now here is where the new tool started to rear its ugly head.  Google then asked me in the demonstration to modify the pages just slightly so that another menu item showed up.  This worked flawlessly while testing locally using the gulp serve command plus edits to two files.

No longer within the scope of their Getting Started information, I thought I’d then push these new changes to Firebase. This is when I began to have problems. And since some of these are new tools for me, I felt out of my depth in handling those problems. In order to push the files from the dist folder it would be necessary to build them again:

$ gulp

And yet, gulp wasn’t happy. After those slight edits to both app/index.html and app/elements/routing.html, the code in gulpfile.js was now failing. I reviewed the code I’d entered (by copying/pasting from their own website) and I’d entered things flawlessly. The problem was that Google’s own people hadn’t run gulp to re-compile their own code, and here I was now troubleshooting their own demo. Not fun.

At this point I found it easier to abandon changes on both of the edited files.  Since I’d initialized git in the folder, this was then an option for me.

What I found frustrating is that gulp was identifying a problem with a file that simply didn’t exist within the project. Maybe this is normal for gulp, but I now have the burden of having to research what gulp does and how it fails in order to understand all this better. I’d like to use the new Polymer platform but this gulp failure is really just a wake-up call that I have new-tool debt that I haven’t paid off yet.

“…this gulp failure is really just a wake-up call that I have new-tool debt that I haven’t paid off yet.”

So be careful with what you bring into your project.  There may be a point in the near future where things are breaking for no apparent reason and others are asking you why.  The speed of your answer and your troubleshooting depends upon whether or not you’ve paid that new-tool debt with research.