Thursday, December 18, 2014

Google wants to warn you every time you use HTTP instead of HTTPS

So recently, Chrome developers have been floating the idea that the UI should post a security alert every time the browser visits a page that isn't encrypted. According to the BBC, currently only 33% of websites use HTTPS (encryption). I suspect that in actual practice the number of websites still using unencrypted connections is even higher than that figure suggests. This would mean you'd be getting many security alerts in practice.

I am all for more encryption. There are far too many parties out there who have something to gain by snooping on your connections. Every time I use a strange Wi-Fi hotspot I worry about who is listening or how they might modify my data.

Many think this is all theoretical. That no one really cares about your data, so unless it's something like banking data it doesn't matter. Nonsense. Dangerously so. You're not up against humans; you're up against software, and with software you're never too small to matter.

Consider that the Wi-Fi hotspot might be inserting ads into the web pages you're looking at. Comcast has been caught doing this. This is annoying and potentially misleading, because the user will think the ads are coming from whatever website they're using. And that's assuming the injection was done competently; a mistake can stop the page from showing up at all. What about replacing existing ads with your own? Too bad for the original website trying to make a living. What about inserting a tracking ID so you can be followed everywhere you go?

What if the ISP doesn't think you should be watching YouTube?

And these are the corporations. Nasty people on the internet can snoop on everything that goes over an unencrypted connection. Much of it can be used to fool support staff and steal domain names or accounts, because why not? To say nothing of identity theft. How much of yourself are you giving away each time you log into Facebook?

Then there's the government. Whether you're liberal or conservative you can bet there's someone who disagrees with something you're doing.

Many websites have encrypted versions of their site. However it can be painful to figure out which sites have an encrypted version and to manually switch over. This is where HTTPS Everywhere comes in.

HTTPS Everywhere is a Firefox, Chrome, and Opera extension that encrypts your communications with many major websites, making your browsing more secure. Encrypt the web: Install HTTPS Everywhere today.

HTTPS Everywhere is a browser extension that contains a database of websites that have encrypted versions and automatically redirects you to the encrypted version of the site without you having to worry about it. This gives me some peace of mind when I'm using public Wi-Fi hotspots. It's not perfect, but it's the best we can do until all connections on the internet are encrypted.

.. and they will be.

Wednesday, December 10, 2014

I Found a Good Headset

Those with long memories will remember that I have been looking for a good circumaural headset ever since my Plantronics 655 headset died.

The Plantronics 655 was never the perfect headset. Its ear cushions were too small and rested on your ears so that they would become uncomfortable after wearing them for a long time. Just about every headset has this problem. I was pleasantly surprised to find that the KOSS SB45 doesn't. Its ear cups are large enough to go all the way around the ears so I bought a pair. I am very pleased with them.

My only complaints are that they exert slightly more pressure on the sides of my head than I'd like and that they don't do whatever magic the Plantronics 655 headset does to let you hear yourself when you're on Skype.



Let me explain: you know how when you wear a headset you can't hear your own voice very well? The 655s play your own voice back to you so you can hear yourself, and since you can hear yourself you don't feel the need to shout. I'm actually surprised, since I thought it was a feature of Skype, but it works with the 655s and not the SB45s, so it looks like some sort of device-level feature (I believe the term for it is "sidetone"). It's really useful and I miss it.

Apart from that, the KOSS SB45 headset is very comfortable, has a good mic and good sound, and is inexpensive. I would recommend it.

Monday, December 8, 2014

Space Smilies now on Google Play store

Well, I've released Space Smilies to the Google Play Store now. Go download it! Have fun! Give feedback!

My plan for the sabbatical was to release two video games. The first was this one. I figured it would take about a week to get it ready for release, and if I hadn't decided to change things that would have been a realistic estimate. Instead I decided to clean up the Space Smilies movement, add levels, add a level editor, redo the graphics, and things like that. I figured with all that it would take a month. It took about 4 months. This, plus a bunch of other demands on my time, means that I'll probably not get to do the game and game editor I wanted to.

Ah whatever.

I have other projects to work on. In fact, it's quite hard to set priorities. Part of the problem with deciding on what project to undertake is that it's not clear what's worthwhile unless you're part of the conversation. That, and Myster set the bar for success really high. We would get 10,000 downloads a day when we released a new version. Most days we'd only get 300 downloads. That's still impressive. It would be even more impressive if I hadn't made a bunch of newbie errors early on in my installers that left most users simply unable to use the application on Windows. Painful. Doh!

Every field has a conversation. If you're a Starcraft player, you can think of it as the current state of the metagame. It consists of what is known, what is done, what needs to be done and what is not worth doing. Actually, it's more complex than that: it includes all the little arguments that are in progress and all the relationships, camps and tribes that are squabbling at the moment. I used to be very connected to these things, but some of the conversations have moved on in the last 10 years.

Games and indie gaming especially. The tools available to modern indie game developers are impressive. Part of me is saddened by the fact that application development frameworks are nowhere near as good.



This means that creating a game is much more about learning the tools than learning exotic programming techniques. I'm not sure I want to bother learning a tool whose sole purpose is to quickly make top-scrolling video games. I would love to WRITE such a tool. In fact, that was kind of the idea, but it looks like I am 5 years too late there.

Doh. That's what happens when you don't pay attention.

Oh well, I'll figure out something. Stay tuned. :-)

Wednesday, November 26, 2014

Android API First Impressions

I've recently finished my game and have started porting some of the Android-only sections back to JavaSE. I have been struck by how much easier everything seems with the Android API.

I find this very surprising. I've been using Swing for a very long time and have gotten good at doing crazy things with it. Even with my knowledge of Swing tricks and hacks, it's still easier to do things on Android than in Swing.

Take, for example, layouts. I've gotten to the point with Swing where I just use GridBagLayout from the start. I've got my own utilities that make using GridBagLayout much less painful than it ordinarily would be. GridBag is surprisingly flexible, if you've managed to survive its brutal learning curve. That said, doing layout with Android's system isn't too bad. I certainly didn't have the parade of WTF moments that I experienced trying to wrap my head around Swing's layout system. Android also has the advantage of coming with a GUI layout editor, which means you don't find yourself blindly changing values and recompiling/relaunching every time to see if the changes did anything.
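My GridBagLayout utilities aren't public and the names below are made up, but a minimal sketch of the kind of helper I mean looks something like this: a tiny builder that hides GridBagConstraints' mutable-struct API behind chained calls.

```java
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.Insets;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

// A tiny builder that wraps GridBagConstraints so each add() reads declaratively.
public class GridBag {
    private final GridBagConstraints c = new GridBagConstraints();

    public GridBag at(int x, int y) { c.gridx = x; c.gridy = y; return this; }

    public GridBag fillHorizontal() {
        c.fill = GridBagConstraints.HORIZONTAL;
        c.weightx = 1.0;
        return this;
    }

    public GridBag pad(int px) { c.insets = new Insets(px, px, px, px); return this; }

    // Return a copy so the builder can be reused safely.
    public GridBagConstraints build() { return (GridBagConstraints) c.clone(); }

    public static void main(String[] args) {
        JPanel form = new JPanel(new GridBagLayout());
        form.add(new JLabel("Name:"), new GridBag().at(0, 0).pad(4).build());
        form.add(new JTextField(20), new GridBag().at(1, 0).pad(4).fillHorizontal().build());
        System.out.println(form.getComponentCount()); // 2
    }
}
```

Nothing clever, but it turns ten lines of constraint-field fiddling into one readable call per component.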

Then there are things like: how do you make all the widgets translucent over an animated background? With Swing, I had to use the arcane knowledge I've accumulated in my 10 or so years working with it. With Android it's just a property on the view or layout. Golly, that's convenient.
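If you're curious what the Swing side of that arcane knowledge looks like, here's a minimal sketch (not my actual game code): a non-opaque panel that paints its content at 50% opacity so whatever Swing draws behind it shows through.

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

// setOpaque(false) tells Swing to paint whatever is behind the panel first;
// we then draw our own content with a 50% alpha composite on top.
public class TranslucentPanel extends JPanel {
    public TranslucentPanel() {
        setOpaque(false);
    }

    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g.create();
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2.setColor(Color.DARK_GRAY);
        g2.fillRect(0, 0, getWidth(), getHeight());
        g2.dispose();
    }

    public static void main(String[] args) {
        // Paint into an image (no window needed) and check the result is
        // actually semi-transparent.
        TranslucentPanel p = new TranslucentPanel();
        p.setSize(64, 64);
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        p.paint(img.createGraphics());
        int alpha = img.getRGB(32, 32) >>> 24;
        System.out.println(alpha > 0 && alpha < 255); // true
    }
}
```

The animated-background part is more code again (a timer repainting an ancestor panel); on Android the whole thing really is just an alpha attribute.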

What about having multiple layouts on the screen at the same time and showing/hiding them like cards? With Android that's the way it works by default. With Swing it's a weird system of content-pane layers, or you can use CardLayout, and either way it's full of the usual Swing WTF moments: I have to do that to show a "card"? JLayeredPane is where?
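For reference, the CardLayout route looks roughly like this minimal sketch (panel names invented for illustration):

```java
import java.awt.CardLayout;
import javax.swing.JPanel;

// A "deck" panel whose children are stacked cards; only one is visible
// at a time and CardLayout.show() flips between them.
public class Cards {
    public static void main(String[] args) {
        CardLayout layout = new CardLayout();
        JPanel deck = new JPanel(layout);

        JPanel menu = new JPanel();
        JPanel game = new JPanel();
        deck.add(menu, "menu"); // the first card added is the one shown
        deck.add(game, "game"); // later cards start hidden

        layout.show(deck, "game"); // flip to the "game" card
        System.out.println(menu.isVisible() + " " + game.isVisible()); // false true
    }
}
```

Not terrible once you know it exists, but nothing about the API tells you that the string you pass to add() is the name you later pass to show().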

Events are handled nicely too, but to be honest I was hoping for more here. Swing's event system is actually not too bad. Android's is very similar but offers additional flexibility: you can name your event handler method directly in the layout. This is cute, but it won't scale well in a fairly complex application. That pretty much sums up my feelings about Android event handling: it's good and tries to make things really convenient, but the convenience comes at the price of encouraging an unscalable application architecture.

Well, I say that, but Android also encourages an application architecture that segments large applications into multiple activities, which would help with scalability a great deal. I don't know how these two conflicting factors work out in real applications, but I'm certain that either way there's more than enough flexibility in the approaches you can take to make everything work.

So, in conclusion, my experience with Android's APIs has been quite positive so far. I'm looking forward to more.

Tuesday, November 18, 2014

Agile

Whenever I hear someone start to talk about Agile methodologies I start to worry because, while the industry has agreed that Agile is the way to go, Agile is often misunderstood.

Agile is often sold to management as a way of getting better quality software faster, with fewer bugs showing up in the field. Well, that might happen as a side effect but Agile is really about being flexible. Change is a big problem on any engineering project. With most engineering disciplines change is very often fatal but at least it's easy to understand why. If you design an engine for a car and it doesn't fit, you're basically screwed. This is where the saying "measure twice cut once" comes from. With software it's not as obvious sometimes why a change would be particularly difficult because software is just a bunch of instructions. Just change the instructions! Duh.

It's never that simple. Software isn't constrained by the laws of physics. As a result, software projects have a tendency to grow in complexity until they become unmanageable. A typical project is a tangle of interdependencies. Some of these dependencies are design assumptions, some of them are organizational assumptions, like budgeting and estimates. For example, if you write a piece of software for a desktop computer and find out that it really needs to run on a smartphone, you're basically screwed. It doesn't matter that it's software.

Agile methodologies are a series of mitigations you build into your software and organization to make them resilient to change during development. Those changes can be discovered difficulties or they can be mistakes.


In order to get into the Agile mindset you must first be convinced that planning is pointless. That the world is so full of unknowns and surprises that trying to plan is like putting on a contact lens in the middle of a sandstorm.

Basically,

  1. Your time estimates are random numbers
  2. The man in charge of the requirements is a raving madman
  3. The chief architect has some kind of dementia
  4. What you're trying to build might be a logical impossibility anyway
So what does all this mean? It means you can't rely on estimates, the requirements still need to be discovered, you're going to make mistakes at the design phase and the whole thing might be a waste of time anyway.

In other words, it's a typical software project.

The one thing you're not allowed to assume is that your programmers are idiots or evil. If your programmers aren't excellent, trustworthy professionals then you're doomed no matter what you do. You might as well go outside and play Frisbee all day. You'll fail either way, but Frisbee is more fun.


Agile mitigations consist of things like this:
  • Chopping the project into many small pieces
  • Prioritizing these pieces with the goal of getting something useful quickly
  • Doing each piece one at a time; avoiding over-design - YAGNI
  • Giving this "something useful" to the customer and finding out if you're on the right track as soon as possible
  • Re-evaluating the priorities of these pieces every day as new information is discovered during development
  • Pushing decisions to the edges of the org chart (developers or others) to allow developers to solve problems without a heavy vetting process (self-organizing teams)
  • Keeping the process lightweight, flexible and adaptable - one that allows people to adapt to changes
  • Improving communication channels between people (co-location, burn-down charts to track progress, a bug database, unrestricted channels - anyone can talk to anyone else in the organization)
  • Using techniques that keep code maintainable. Unmaintainable code is by definition hard to change.

My key point is that Agile methodologies don't make change free. They don't make software development magically faster or higher quality. It's all about being flexible and mitigating the damage done by routine changes during software development.

Monday, November 17, 2014

Headsets and things

A little while ago I wrote a blog post about replacing my Plantronics 655 headset, which had recently stopped working. Well, I spent a great deal of time searching for a good replacement and eventually came to the conclusion that there aren't any good headsets out there. Every single one of them has issues: either the ear cups are not large enough, or the mic doesn't work very well, or worse.

The Razer Kraken USB isn't too bad, but I didn't like the circular earpieces. They were too small (5 cm) and the wrong shape. Ears aren't circular, so I'm not sure why Razer went with circular ear cups. Most of the reviews complained they fit funny.

The SteelSeries headsets had terrible microphones. I like SteelSeries as a company (my mouse is a SteelSeries Xai), but I would be embarrassed to use such a terrible microphone.

The worst, however, was the Sennheiser headset. Sennheiser has a fantastic reputation online. So good, in fact, that when I found a pair of PC333D G4ME going for cheap I bought them right then and there. They are normally out of the price range I would spend on a headset, even with the sharp discount I got them for, so I was hoping they would be amazing. Nope. Crushingly disappointed. Literally: they crushed my head with such force I couldn't wear them.

Have you ever been back to an elementary school as a grown man and tried to sit down at one of those tiny desks? That's pretty much what it felt like trying to put on the PC333D G4ME. I should point out I have never had anything close to this experience before; every headset I have ever tried fit nicely on my head with plenty of room to spare. It's starting to make me wonder: do you have to grow up with this headset? Is it like artificial cranial deformation? You start off as a toddler playing games with this headset, and over time your skull changes to fit it?



 Picture of typical Sennheiser customer skull


I had to send the PC333D G4ME back to the online retailer with a financial penalty so that has left me grumpy.

What surprised me is that it's very hard to find a headset with ear cups that are properly circumaural. All of them seem to have ear cups that are about a centimeter too small, except for Plantronics, whose ear cups are 2 cm too small. If you're wearing a headset all day they will press on your ears and become uncomfortable.

I was also surprised that many manufacturers (some not explicitly mentioned here) ship their headsets with terrible microphones. If you're gaming online this tends to only annoy other people. However, if you're trying to do digital dictation, a good microphone is important.

The one manufacturer I haven't tried is Koss. This is partly because I can't figure out where Koss products are sold in Montreal and don't want to play the online ordering lotto again. The most comfortable headphones I own are Koss. From talking to their sales staff, they might have a headset with large enough ear cups. I say "might" because no one at the company would commit to any measurements, so I am still not sure.

At the moment I'm using a new pair of Plantronics 655s. They aren't the most comfortable headset ever, but they were dirt cheap, they have a good microphone and they don't crush my skull. I'll continue to use them until I can get a decent replacement.

Wednesday, November 12, 2014

Space Smilies (beta)

For the last 4 months or so, I've been writing a video game and it's now at the beta testing phase.

The game is called Space Smilies and is similar to Space Invaders but with a few differences that show themselves as the game progresses. It also includes a built-in level editor so you can tweak your own levels simply by adjusting a few parameters.

The game is based on the old Space Smilies code that made an appearance on this very blog back in 2010. That code was in turn based on a video game I wrote in 1999 while studying at University so the code has had a long history.

The game is built for the Android platform, so if you have an Android phone (or Android compatible) you can try it out below:

Space Smilies for Android

Note, the link above is updated with new versions as they are built so if you want to get the latest build all you need to do is re-download it using the same link.

Monday, November 10, 2014

Nexus 5 and garbage collectors

I really like my Nexus 5. It's at the level of refinement where I would say everything works more or less correctly. This makes a change from my previous smartphone, the Samsung Galaxy S. That was a terrible phone. The Galaxy S was too slow, didn't work as a phone owing to a terrible microphone, didn't work as a GPS, would flatten its battery in about 4 hours if you forgot to turn off the GPS or Wi-Fi, and crashed fairly often too. I initially got the Galaxy S due to its reputation as a sort of landmark phone. I figure that means the phones before it were even worse somehow.

So, anyway, I like my Nexus 5. It works as a phone, GPS, keeps a charge and is stable. It also does a cute impression of a flashlight, has a decent camera and can make a sweet Mango Lassi. Well, everything except that last one. I feel like some kind of digital wizard carrying it around. Need a light? Boom! No problem. Need to know the weather? Boom! Weather radar! Need a map of Belgium? Well, that's random but I can get that for you too.

It's also been a good testbed for my application. I've learned quite a few things using that phone like....

Did you know that Android's garbage collector doesn't do compaction? For me this is a little like learning that Ferrari's new car is steam powered; it's a bit difficult to wrap my head around. I figure there's some good, technical reason why they do this. I'd wager that they were trying to avoid garbage collector pauses; Google seems obsessed with avoiding pauses or stuttering on Android (which they call "jank"). The thing is, if you don't have compaction you can run out of memory without.. umm.. running out of memory!

Compaction is the step that un-fragments memory. Wikipedia has a good article on memory fragmentation, but basically it's when memory gets filled with lots of little holes. Think of it like empty seats in a movie theater. If you arrive too late, all the free seats are singles or pairs scattered all over the place. If you're a group of four and want to sit together, you can't, because there aren't four seats together. Compaction is the step where you politely ask people to move around so all the free seats end up in one big row; that way large chunks of memory can sit together. If large chunks of memory can't sit together, you get an out-of-memory error even though there are technically enough free seats.
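You can watch the movie-theater problem happen with a toy allocator. This is plain Java and purely illustrative; it has nothing to do with how Android's heap is actually implemented:

```java
// A toy non-compacting allocator: a row of "seats" (slots) that are free
// or taken. Without compaction, total free space can be plentiful while
// no contiguous run is big enough for a large allocation.
public class Fragmentation {
    private final boolean[] taken;

    public Fragmentation(int size) { taken = new boolean[size]; }

    // Claim 'size' contiguous slots; return the start index, or -1 on failure.
    public int alloc(int size) {
        int run = 0;
        for (int i = 0; i < taken.length; i++) {
            run = taken[i] ? 0 : run + 1;
            if (run == size) {
                int start = i - size + 1;
                for (int j = start; j <= i; j++) taken[j] = true;
                return start;
            }
        }
        return -1;
    }

    public void free(int start, int size) {
        for (int i = start; i < start + size; i++) taken[i] = false;
    }

    public int totalFree() {
        int n = 0;
        for (boolean t : taken) if (!t) n++;
        return n;
    }

    public static void main(String[] args) {
        Fragmentation heap = new Fragmentation(10);
        for (int i = 0; i < 5; i++) heap.alloc(2); // fill the row with pairs
        heap.free(0, 2);
        heap.free(4, 2); // two separate 2-slot holes
        System.out.println(heap.totalFree()); // 4
        System.out.println(heap.alloc(4));    // -1: 4 slots free, but never together
    }
}
```

A compacting collector would slide the survivors together so that alloc(4) succeeds; a non-compacting one just reports an out-of-memory error.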

I think I mixed my memory-aphores there.. but you know what I mean.

Luckily, the latest release of the Android operating system includes code that does compaction, although they seem to imply it only happens when you switch out of the application or something. I guess garbage collector pauses are hard to notice when you've switched out of the application. Does this mean you'll need to switch out of the application you're using every once in a while to avoid running out of memory? I hope not. :-)

Friday, November 7, 2014

The last 10%

Well, that was a fun month or two. I've been working hard on my video game, getting it ready for release. It's so close to being completed I can smell it. To the extent that you can smell software, that is. The one major takeaway lesson I've learned so far is that software development is slow. Agonizingly slow.

I actually knew this already. I've been writing software for a good 15 years both on my own and in large teams, but it's just hit me again: software development takes forever.

Someone said: the first 90% of the product takes 90% of the time; the second 10% of the product also takes 90% of the time. This is a pretty good summary.

The thing that trips you up is that it's relatively quick to get something basic working. So if all you've ever done is a little scripting for yourself or in-house tools, you're getting a warped perception. Once you want to release software commercially to actual users, the quality needs to be much better. And I'm writing software in the consumer space, where no one cares that it's hard and there are no second chances; it has to be perfect the first time.

For example, with this game I'm writing, it took me about two weeks to get the program to work correctly when it's in the background. You would think this would be fairly simple. After all, it's unlikely you've ever run into an application on Android that had issues when put into the background. Well, my game was crashing. After I fixed the crashing, the game would continue to play itself while in the background. After I fixed that, the music would do the same. Then I had to release the graphics to free up memory, to be a good Android citizen, or risk being barred from the Google Play store. Finally I had to correct an edge case where locking the phone would cause the application to reset.
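The shape of the fix for the game playing itself in the background, sketched in plain Java rather than my actual Android code (on Android, the pause()/resume() calls below would be driven by the Activity's onPause()/onResume() lifecycle callbacks):

```java
// A game loop that blocks, rather than ticks, while the app is backgrounded.
// The loop checks a paused flag and waits until the lifecycle flips it back.
public class GameLoop implements Runnable {
    private boolean paused = false;
    private boolean running = true;
    private int framesTicked = 0;

    public synchronized void pause()  { paused = true; }
    public synchronized void resume() { paused = false; notifyAll(); }
    public synchronized void stop()   { running = false; notifyAll(); }
    public synchronized int frames()  { return framesTicked; }

    // Block while paused; no CPU (and no gameplay) while backgrounded.
    private synchronized boolean awaitUnpaused() throws InterruptedException {
        while (paused && running) wait();
        return running;
    }

    @Override
    public void run() {
        try {
            while (awaitUnpaused()) {
                synchronized (this) { framesTicked++; } // update + draw would go here
                Thread.sleep(16); // roughly 60 fps
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The bug, in other words, was a loop that kept ticking no matter what the platform told it; the fix is making the loop cooperate with the lifecycle instead of ignoring it.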

It takes time to understand how things work on a new platform and how to design an architecture that works with the platform instead of fighting it. Throughout the process you can't help but think you're not adding anything to the product; I'm not adding levels or new characters or anything to the game. I'm just fixing stuff that should just work anyway. It also doesn't demo well: Look! Doing this doesn't crash! It did before?

I don't mind though. In fact, it's what attracts me to the consumer space, because consumers have a choice. People aren't saddled with your software because their organization chose it for them. They aren't forced to use it because that's what their client uses. When someone uses your software it's because they want to. No IT department to help with the migration. No pressure to conform. You have to convince people to use it. In this sense it's a more honest type of software development. I've always had high expectations of the software I use, and I've observed that, in aggregate, when they have a choice, human beings do too.

Friday, September 26, 2014

My Video Game's Leaderboard

So, for the last few days I've been working hard on my game and generally ignoring blog-related activities. This has resulted in two things. The first is that there have been no blog posts, but the second is that I can finally see the light at the end of the project. The game I'm writing is nearing completion, and this makes me happy.

I decided to stop working on the blog when I started working on the global leaderboard for my game. The idea is that instead of having a top-ten list only for the local machine, you have one giant list for the entire world. Consequently, it has to have more than ten entries in it. I figure the world must contain several dozen people, so I should use a database for that sucker (I'm not actually sure of the world's population since I don't go outside anymore :( ).

The problem with databases is that I've spent most of my time as a professional developer trying to avoid SQL. I don't like SQL; it reeks too much of command lines and the 1970s. My internal conceptual associations go something like: silly hair, the colours orange and brown, tiny tennis rackets, and SQL. SQL is an injection attack just waiting to happen. Seriously. You need to escape everything properly or use an API with prepared statements. Otherwise the teenage equivalent of me is going to turn your database into a playground.
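My leaderboard code is PHP, but the same point is easy to make with Java's JDBC. The table and column names below are invented for illustration; the contrast between the two approaches is the point:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// The injection problem in miniature.
public class Leaderboard {

    // DON'T: string concatenation lets a player's *name* become SQL.
    static String naiveInsert(String player, int score) {
        return "INSERT INTO scores (player, score) VALUES ('"
                + player + "', " + score + ")";
    }

    // DO: placeholders. The driver keeps data and SQL separate, so the
    // name is stored as text no matter what characters it contains.
    static void safeInsert(Connection db, String player, int score) throws SQLException {
        try (PreparedStatement ps =
                db.prepareStatement("INSERT INTO scores (player, score) VALUES (?, ?)")) {
            ps.setString(1, player);
            ps.setInt(2, score);
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        String attacker = "x'); DROP TABLE scores; --";
        // The attacker's "name" is now part of the SQL statement itself:
        System.out.println(naiveInsert(attacker, 0));
    }
}
```

PHP's equivalent of the safe version is PDO with bound parameters; the principle is identical.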

In order to hook up the leaderboard to my application I also needed to use Apache and PHP. I chose these two things because my ISP chose these things and was nice enough to let me use them.

In order to write my code without affecting production I needed to create a test environment. This proved extremely difficult, partly because I'm on Windows, which is considered weird by the UNIX crowd who make all this stuff (also 1970s, BTW), but mostly because I was trying to match all the version numbers with the ones my ISP had installed. It's generally a good idea to test on the same software versions you deploy on, because then things have a better chance of working. In this case that would probably require time travel, as the ISP hasn't updated its software since dinosaurs roamed the earth.

Space: The go-to place for all good video games

In any case, I eventually got a decent test setup using fairly up-to-date versions of everything, but it took just about a week to do. Normally I wouldn't mind too much since I'm being paid to do it; however, in this case I was fully aware I was wasting my own time and could probably have built my own server software quicker at this point (it never works out that way, but it feels like it should).

Anyway, the general upshot is that my game now has a global (as well as local) leaderboard. Hopefully gamers will like that and compete for higher rankings. If they don't I just wasted a huge chunk of time.

Tuesday, September 9, 2014

Advanced Settings Blog

At some point someone noticed that there was a bunch of stuff in the settings panel. "This is too cluttered!", someone said, "Some of this stuff must be for power users! Put those settings in an 'advanced' section.".

Ah, but what is "advanced"? This is, apparently, a hard question to answer, because everything I want to change seems to be in the "advanced" section.

I would suggest a few guidelines:
  1. "Advanced" is not a synonym for miscellaneous. Just because it's not used or doesn't fit into any other category doesn't mean it's "advanced".
  2. "Advanced" is not a synonym for rarely used. Sure, I rarely delete a password using the password manager but deletion of a remembered password isn't rocket science.
  3. "Advanced" does not mean hidden. Sometimes you need to get technical; that doesn't mean it has to be hidden like some kind of video game secret level. It's getting to the point where I need to google the entire internet to find the Konami code to change a setting.
  4. If >90% of users don't know what a setting means, and messing with it can seriously screw things up, then it's an advanced setting. Be sure to include a "reset to defaults" button somewhere. Maybe a help button too, since you are allowed to try to help interested users understand things. You're just not allowed to assume that they'll read it.

Thursday, September 4, 2014

Headset replacement

So my Plantronics 655 headset recently stopped working. This is annoying, but it does give me a good excuse to get a better one. While I was impressed with the decent microphone and lack of any background hiss, the Plantronics headset had some issues. The biggest problem was that it became very uncomfortable over time.

Note to self: supra-aural headphones are not good for wearing long term.

For my headset, I've been looking at gaming gear because I've come to learn that, for some things, gamers are the most demanding: best audio quality, best microphones, best comfort, ruggedness, etc. Gaming headsets need to do all these things.

I'm also determined to get a USB headset because I'm addicted to the quiet. To quote myself:
Usually a headset plugged into the stereo mini jack on the computer will create a tiny background hum or hiss all the time. This is typically because the audio card on the machine isn't perfectly isolated from all the electrical noise coming from inside the computer. Because this is a USB headset, however, there is none of that. It sounds as if the headphones are not plugged in, as if the computer is not playing any sound.

So, to re-cap, I'm looking for a circumaural set with good audio, good microphone (ideally one that doesn't pick up room noise), and is USB.

My current lead contender is the Razer Kraken USB. Razer has many headsets available, most of which seem to be named the Kraken, and a web page so you can wade through the morass of Kraken models. Want to know the difference between the Razer Kraken 7.1 (which is USB) and the Razer Kraken USB (which also does 7.1 audio!)? Heavy sigh. The Razer Kraken USB is relatively inexpensive too, so I won't feel too bad next time I sit on them and they break.

There are many other contenders though and I'll be looking through the online reviews and checking them out. Until next time...

Wednesday, August 27, 2014

Scala course on Coursera by Martin Odersky

Martin Odersky, the creator of Scala, is doing an online course on Coursera starting September 15th of this year (2014). It's an advanced course meant for those who already know a programming language like Java or C#, although knowing languages like C/C++, Python, JavaScript or Ruby will also work. Since I am actually in the process of reading Martin Odersky's excellent Scala book, this works out nicely for me. The first edition of that book is available for online reading too.

Scala is an attempt to create a language for the JVM that is completely compatible with existing Java code but pushes further than Java. Scala adds things like lambdas and functional programming concepts while trying to address criticisms of Java, such as its verbosity. Scala has been picking up steam recently and seems to be where the cool JVM people are hanging out (assuming it's possible to be cool with Java).





Ok, one more ridiculous video: is that a Mac Plus? Pff, Java doesn't run on that.

Tuesday, August 26, 2014

final Keyword in Java

I like the "final" keyword in Java. However, I'd like it more if every reference were final by default and "mutable" were the keyword for creating a mutable reference. That way around is better, because if you see "mutable" on a declaration you know the author took the time to add it because they are mutating the reference. Mutation is the case you want to watch out for and discourage. Today, if a reference isn't marked final, you don't know whether that means it's mutated or the programmer just isn't using final. In my experience mutability is the rarity and final is the common case.

Unlike C++, Java doesn't have a way of making both the reference and the object constant. In Java, the declaration:

public class Foo {
    private final Date someDate = new Date();

// ....

Doesn't stop you from mutating the object like this:

    someDate.setTime(1234); 

It just stops you from changing the reference like this:

    someDate = new Date(); // not allowed, reference is final

Some programmers use the final keyword to mean that not only should the reference not change but the object shouldn't either. Using final in this way is not a good idea. Basically, that's not what final means, and if you do that you're not helping anything.

I am generally against using language keywords like final to mean something other than what the compiler can assert from them. Marking a reference final doesn't guarantee that the object won't change. You might as well add a comment to the declaration like this:

public class Foo {
    // someDate is not mutated
    private final Date someDate = new Date();

// ....

Because it has about the same chance of being honoured, and the same chance of going out of date.

Additionally, knowing that the reference is final (not the object) is still very useful on its own. This is why C++ has the ability to make both the object and reference constant independently. By using final to mean the object and reference shouldn't change you confuse maintainers with an idiosyncratic style and destroy the utility of final-for-references-only.

If you want to create an object that doesn't change then create an immutable object. (see the link)
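To make that concrete, here's a minimal sketch of what I mean by an immutable object. The class and method names are my own invention, not from any particular library:

```java
// A minimal sketch of an immutable class (names are hypothetical).
// All fields are final and no method mutates state; "setters" return
// new objects instead of modifying this one.
final class ImmutableDate {
    private final long millis;

    ImmutableDate(long millis) {
        this.millis = millis;
    }

    long getTime() {
        return millis;
    }

    // Instead of mutating this object, return a fresh instance.
    ImmutableDate withTime(long newMillis) {
        return new ImmutableDate(newMillis);
    }
}
```

Once every field is final and every "modification" returns a new instance, a final reference to the object really does mean that nothing can change.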

I have to say that I personally don't make function parameters final because it's too much bother. I've found it looks like noise in the code and other developers resent it. Instead I have the compiler warn me if there's any assignment to a method parameter, which accomplishes the same thing.

If you are using eclipse you can turn that on by going to Window -> Preferences -> Java -> Compiler -> Errors/Warnings and turning on the "Code Style -> Parameter Assignment" warning. Go ahead, make it an "Error" if you dare.

Marking an object's members as final is much more useful. I've always thought of object members as mini-global variables. Global variables are bad but global constants are less so. When followed religiously, this allows the maintainer to see at a glance which references are being mutated, or ideally that none are. When combined with extensive use of immutable objects it also allows you to quickly see what, if anything, is being mutated.

While I don't bother marking method parameters final, I try to mark every class and object member I can as final. I've gone so far as to re-write code to mark more members final. I find it helps me understand my code and the code of others faster, and it helps avoid errors too.

Friday, August 22, 2014

Evangelism in Software

"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." - Howard H. Aiken

Imagine that one day you're sitting in your office  coding away and suddenly you hit upon this idea for a glorious new framework that allows you to not only solve the problem but also problems you know that others are working on. You write up your framework, make sure it's unit tested and put it in the libraries project for all to use and you're done.

Well, no you're not. While the framework might be available, no one knows about it yet. So, no problem, you send an email to the developers' mailing list and, for whatever reason, people still aren't using it. No problem, you've done enough code reviews to know that people need a bit of background knowledge when using the library, so you create a presentation that explains why the library is awesome and present it to everyone. It seems to go well, but even then, no one uses it. Everyone keeps doing it the long way. What is going on?

Insight is special because it must be earned with experience; long hours getting burnt doing silly things over and over again until you see the light of a better way. The more revolutionary the insight the more likely that people won't understand or appreciate it.

For example, back in the late 1950s, John Backus and a small team at IBM developed Fortran, the first high-level language capable of being used on the limited computers of the time. It was designed to make the act of programming quicker and easier. It was an incredible technical accomplishment. It was also treated dismissively by many in the industry, who thought it didn't make programming easier at all, or that they didn't need it because they already found programming easy. They preferred to stick to assembly.



This pattern repeats with structured programming, object oriented programming, revision control, garbage collectors, lambdas and even today with functional programming.

The worst thing about any revolutionary technology is that it makes your knowledge obsolete. There's a cost to throwing out what you know, and the more you know the more you need to throw out. If you take a look at the list of things above, they were all hyped as silver bullets when they came out. Not all of them lived up to the hype... and these were the ideas that succeeded. Writing frameworks is hard and very few are done well. It's hardly surprising people are sceptical.

In order to get any change accepted you need to:

  1. Buy-in from those who will be affected. Change isn't always good and you need people to believe that your change is a change for the better.
  2. It must be understandable. The steeper the learning curve the more you have to work to convince people it will be worth the payoff.
  3. It must have a significant payoff. The advantages of your new framework should be easy to articulate; work on your elevator statement.

Once you've made your first converts things get easier because they will evangelize on your behalf. At least they should. If you can't even get the people who work with you in the same domain to take an interest you might not have the revolutionary new framework you think you do. In this case listen to the feedback they are giving. They might have some insight that will make your framework better.

I can tell you, creating a new API that other people actually use is very difficult. It's as much a political challenge as it is a technical one. That said, sometimes the results of all that effort can be revolutionary.

Wednesday, August 20, 2014

Coding Standards - not blank!

In answer to the question: "Where did the blog post on coding standards go?" Much like the One Ring it's still secret and it's still safe; I haven't finished writing it yet. I'm not sure why that blank post showed up, but I think I accidentally clicked the publish button before I started writing. It does work on a symbolic level, though. The worst coding standard is the blank coding standard.

Every large team should have a coding standard that defines how code should be formatted. These coding standards make code quicker and easier to read because the style is consistent, and once you learn it your brain can work via pattern recognition - that is, reading code by processing its shape, not its content. It's the same sort of thing the brain does when reading normal text. That's what makes "italics" and ALL CAPS more difficult to read: your brain has to work harder because it can't just match shapes, it has to do some decoding first.

I don't agree with the theory that one coding standard is any better than another. The most readable coding standard is the most familiar one. The K&R, BSD, or GNOME coding standard is easy to read because it's familiar. The only point I'd make is that it would be nice if everyone used the same coding standard. Come to think of it, why aren't C-like languages defined in a way that limits the possible conventions, like the position of the {}? I suppose the original reason was that C was intended to provide a flexible toolkit for creating as many arguments as possible. Using C and its derivatives, it's possible to create lengthy discussions that go on for hours without conclusion. It's also possible to create bitter team divisions as the brain's legacy limbic system switches on and forms tribes around different ideals. Hallelujah! and Amen.

:-)


Thursday, August 14, 2014

Using Multiple Monitors

I've been an advocate of multiple monitors ever since I realized that productivity wasn't decadence. I have three monitors attached to my computer, all of them Dell U2211-H panels. The U2211-H has a 16:9 aspect ratio, which makes it a little too wide. Text is best read in long, thin columns, so the extra width is overkill for that. Also, if you have three 16:9 screens you need a really wide desk. 4:3 is the perfect ratio for multiple computer screens. If you're working on text all day, a portrait-oriented 4:3 is even better since you can fit more text on the screen that way. It's also close to the ratio of a standard 8.5 x 11 sheet of paper.

The main problem with 4:3 screens is that they are very expensive new. The rumour is that most computer monitors these days are simply a repackaging of HDTVs. Even the 16:10 aspect ratio is getting hard to find and 16:10 is better than 16:9.

My three 16:9 monitors are laid out with the leftmost one in portrait mode and the other two in landscape. I put my code editor and email client on the portrait screen and use the two landscape monitors for everything else. A 16:9 monitor in portrait mode is comically tall but works really well for code and emails. It allows me to see 95 lines of code on the screen at once (with a nice big font!) without having to scroll. To get the same effect from a larger landscape 16:9 monitor you'd have to almost double the display size.
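The arithmetic behind that is straightforward: rotating the panel turns its horizontal pixels into vertical ones. A quick sketch, assuming a 1920x1080 panel and a 20-pixel line height (both numbers are my assumptions, not measurements):

```java
// Rough arithmetic for the portrait-mode claim. The resolution (1920x1080)
// and the 20px line height are assumed values; visible lines is just
// vertical pixels divided by the height of one line of text.
final class PortraitLines {
    static int visibleLines(int verticalPixels, int lineHeightPx) {
        return verticalPixels / lineHeightPx;
    }
}
```

With those numbers, landscape shows 1080 / 20 = 54 lines while portrait shows 1920 / 20 = 96, which is why a landscape monitor would need nearly double the height to match.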


I use my computer for both gaming and coding. Having the leftmost monitor in portrait mode is a compromise. It means that whenever I code I move my mouse and keyboard over to the left so that this screen becomes my primary screen. Then, once the day is over, I slide back over to use my centre, landscape monitor for games and web browsing. My monitors are actually on pivots, so I should try just pivoting the centre monitor while I'm working and pivoting it back when I'm done. Maybe I should put all three monitors in portrait mode, since the bulk of what I do involves reading code or text. I'll have to try that over the next few days and get back to you.

Recent window management features added in Windows 7 make super large displays a viable alternative to a multi-monitor setup. Windows 7 has the ability to snap a window to the left or right half of the screen, which makes managing two windows on the screen at once far easier. The keyboard shortcut is Windows key + left or right arrow.

That's really the only difference between multiple monitors and one massive monitor - software support. With multiple monitors the OS knows you want to use each monitor as a separate region, so that works already. If you're using one giant monitor the OS has no idea what to do. Do you want one big window or a bunch of regions or what? Back when I bought my three monitors I didn't know about the Windows 7 snap features, so I chose multiple monitors rather than rely on software that might not be there. Oh well... I still love my three monitors.


Wednesday, August 13, 2014

Software Development Estimates

Programmer estimates are notoriously bad. I've been in the position both of giving estimates and of being blocked by someone else's faulty estimate, and the whole process is deeply frustrating for everyone.

I've been trying to figure out what is going on with task estimation for a while. There are plenty of theories on the internet. Many say programmers are just too optimistic. That's undoubtedly part of it but in terms of a theory with predictive and explanatory power it's like saying that programmers underestimate tasks because their estimates are always too short.

A group of psychologists first proposed a theoretical basis for... I feel I should say "optimism"... in a 1979 paper called "Intuitive prediction: biases and corrective procedures". This pattern of optimistic task estimation has shown up in tax form completion, school work, origami and a bunch of other things. From this we can conclude that it isn't that programmers suck at task estimation; human beings suck at task estimation. A condemnation of humanity but an encouraging result for those who still cling to the ridiculous notion that programmers are people. The phenomenon is called the "Planning Fallacy" and Wikipedia has a great summary if you're interested. I estimate it will take you 5 seconds to read, so go ahead.



Optimistic estimates are bad enough, but organizations will often add estimate distortions of their own.

  1. Fabricate a schedule without consulting anyone who will be taking part in the project.
  2. Ask developers for estimates and then cut them down because they are too high or because those wacky developers are "padding" their estimates.
  3. Ask for estimates from developers without there being a detailed plan to estimate.
  4. Use estimates for a plan that's not being used anymore. Sometimes this happens because additional features were added. Sometimes it's because the team you are integrating with hit a technical wall and had to change everything. Sometimes people are still using the pre-project estimates when the task estimates are available.
  5. Ask a lead developer for an estimate then give the job to an intern.

I've been on more projects than I can count and it didn't seem to matter who was on the project or how many times they've been through the process because at least one of these things happened. The last project I was on had the last three estimation pathologies show up at one point or another.

Once a project is behind schedule and you have a deadline to meet, all the options start to suck.


There have been many attempts to fix this estimation mess, some more successful than others. The current best practice is to use some variant of agile/scrum. I say "some variant" because agile programming has many forms and not all of them are understood properly.

Agile software development turns the estimate problem on its head by admitting that estimates are likely to be wrong and that features will be added or removed, so why not deal with it? The first thing agile does is try to compute a fudge factor for the task estimates. The assumption is that the estimates are relatively accurate, just off by some constant factor X. If you can figure out this factor you can multiply all estimates by X and get a more accurate number. In practice this helps but isn't a panacea.
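A minimal sketch of that fudge-factor calculation. The class and method names are hypothetical, not from any agile tool: X is total actual time divided by total estimated time over past tasks, and new estimates get scaled by X.

```java
import java.util.List;

// Sketch of the agile "fudge factor" (hypothetical names): compute
// X = (total actual hours) / (total estimated hours) from past tasks,
// then multiply new estimates by X.
final class FudgeFactor {
    // Each history entry is a {estimatedHours, actualHours} pair.
    static double factor(List<double[]> history) {
        double estimated = 0;
        double actual = 0;
        for (double[] pair : history) {
            estimated += pair[0];
            actual += pair[1];
        }
        return actual / estimated;
    }

    static double adjust(double estimateHours, double x) {
        return estimateHours * x;
    }
}
```

For example, if past tasks estimated at 2 and 3 hours actually took 4 and 6, X is 2.0 and a new 5-hour estimate becomes 10.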

The second thing Agile does is it seeks to minimize the problems bad estimates cause. This may seem like giving up because it is. As far as I can tell, no one in the software development industry has managed to solve this problem outside of some very special cases. The best strategy is to plan for bad estimates.

With agile, programming task estimates are re-calculated on a weekly basis so that estimates can include the latest information as it's discovered. The focus is on keeping estimates up to date and communicating them, instead of realizing a feature is too large when it's impossible to do anything about it. Additionally, features are done in order of importance; there's an ordered product backlog with the most critical things at the top of the pile. This way, developers know what is important and what isn't and can work with that in mind. When the deadline comes around you're guaranteed to have the important things done instead of whatever was fun to program.

It's way too hard to give a good summary of Agile here so I'm going to point to some resources:

  • Wikipedia's Agile Software Development page is a good starting point.
  • You can also look at Scrum. Most industry standard agile best practices are some variant of Scrum.
  • Joel Spolsky advocates his version which he calls Evidence Based Scheduling. I wouldn't normally include this but the page has a good explanation of where typical task estimation goes wrong.
  • There are quite a few consulting groups that can help too.


Developers need a feedback loop for their estimates. At some point after the feature has been implemented (this includes debugging!), developers should get feedback as to the original estimate and the actual time taken. Most agile tools will try to capture that but it might miss important aspects like design time or the time it took to gather requirements. In any case, this information should be explicitly presented to everyone on the team (including managers). Developers rarely consider how long - in real time - things are taking or how it relates to their estimates. Compiling and presenting the numbers to the team at the end of a project, when they are likely to be receptive, is enough to start this thinking process. It also communicates that the organization cares about the accuracy of estimates. If you're curious about why something took so long, this is a good place to have that discussion.

Out of the many projects I have been on, none of them have outright failed despite the usual optimistic estimates and estimation pathologies. This is because I have always worked within organizations that were flexible enough to work with bad estimates. I am completely convinced that protecting your project against bad estimates is a realistic approach to managing estimate risk. That said, better estimates are always welcome, so watch out for organizational estimation pathologies and make sure the developers realize how long their tasks are actually taking vs their estimates.


Thursday, August 7, 2014

Garbage Collectors

Hurray for garbage collectors!

There's been quite a lot of work put into garbage collectors these last 10 years and they have gotten a great deal better. They now pause less and are more efficient than their forebears, and a good thing too. Common wisdom has it that there's a 2X productivity difference between managing memory yourself and using a garbage-collected system, so it's a good thing they're more usable now.

I've been watching a video about modern garbage collectors online. It's called "Understanding Java Garbage Collection and what you can do about it". Take a look.

Tuesday, August 5, 2014

Why do You Always Seem to be Refactoring?

Wikipedia defines refactoring as:

Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior. Refactoring improves nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity to improve source code maintainability, and create a more expressive internal architecture or object model to improve extensibility.
... and this definition is good, but it doesn't capture how refactoring fits into the overall picture of software development on a long-lived project. On a software project that has been going for a while, most of what developers do is refactoring.

When you first start a brand new project there's tons of new code to write. There's the game engine and the code that talks to the website and then there's the website code itself. However, as the project continues you will find yourself transitioning from writing brand new things to reusing existing things in a new way.

What has happened is that over the years your project has built up a toolkit for dealing with problems in your domain. You don't need a "users" database because you already have one. Similarly you don't need a messaging system because one already exists. If your project has gone on long enough it even has an email client in there somewhere. The project becomes more about refactoring existing code to do new things and less about adding new code.

Why build when you can reuse? As Joel pointed out in his now classic article, re-writing something is surprisingly hard. I'll let him explain it:

Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.
When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

Re-writing something is very time consuming and comes with the added problem of violating "don't repeat yourself". That is why you refactor a lot on a long-lived project.

Before I go I want to point out how this fits into the previous article on Technical Debt.

On a long lived project, a new feature will typically be implemented by changing the existing code base. All aspects of the existing codebase that aren't compatible with the new direction become technical debt. You refactor the code to get rid of the technical debt and magically you have the new feature with barely any new code. This last part can confuse people because refactoring isn't supposed to change external behaviour and it doesn't here either. What's changing is the software's organization. You're taking a design that was never intended to have the new feature and turning it into a design that expects the new feature. Once you do that, the feature can sometimes be implemented in just a handful of lines of code.

If you're curious about whether what you're doing is the sort of refactoring I'm talking about then read this article by Steve Rowe about when to refactor.

Until next time here is a picture of a bunny:


Monday, August 4, 2014

Technical Debt

I'd like to take today's soapbox to explain the concept of technical debt.

During the development of an application a developer faces many decisions where he can choose to do what's right for the long term or what is expedient for the short term. For example, a programmer might fix a bug in a way that is quick to do but difficult to understand or he might re-write the code to fix the bug and also keep the code easy to understand. A typical developer makes these decisions multiple times per day. (I hope you trust your developers!) The path your developers choose determines things like how easy it is to add a feature, whether anything breaks when you do and how long the changes take to stabilize.

Every time a developer chooses to do things the quick way, he slows himself (and everyone else) down in the future. The thing is, the harder he pushes the quicker things get done in the short term, but the longer things take in the future. Over time he will be working very hard, taking all the shortcuts he can, and not advancing at all.

Aspects of legacy code that slow you down are called technical debt. It works like ordinary, monetary debt. You can take shortcuts and spend more than you have for a while but if you keep doing it the interest will kill you.


Every project needs to hurry at some point. That's perfectly normal and it should be possible to take on technical debt for a short while. However, if you keep doing that your software project will eventually stall; you'll reach project bankruptcy.

Technical debt is usually accrued by taking shortcuts, but this is not the only way. You can also get technical debt by changing the requirements. Every time you add a feature or otherwise change the behaviour of a piece of existing software it requires changing how that software works. If many changes are required then those required changes become technical debt.

On large projects, relatively simple changes can cause a cascade of modifications across the whole software project. For example, a new feature might require a change on the backend, which might require a change to the database, which might require a change to the database update script. A seemingly small change might cause a crisis-level change by the time it hits the database. For example, supporting characters with accents not expressible in CP-1253 would require upgrading the database to UTF-8... which might not be easy if your database engine is very old. Suddenly a system that has worked fine for years becomes a big blob of technical debt.
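You can probe for exactly this kind of problem with the JDK's charset API. A tiny sketch (the helper name is mine; the test strings are just examples):

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

// Quick check for whether text survives a legacy encoding. CP-1253 (Greek)
// can hold Greek letters but not, say, a Latin "u-umlaut" - exactly the kind
// of character that forces the UTF-8 migration described above.
final class EncodingCheck {
    static boolean fitsIn(String charsetName, String text) {
        CharsetEncoder encoder = Charset.forName(charsetName).newEncoder();
        return encoder.canEncode(text);
    }
}
```

Here fitsIn("windows-1253", "αβγ") holds while fitsIn("windows-1253", "ü") does not, so a single "ü" in your data forces the encoding change.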

On these large, long running projects, the hardest part of any enhancement is integrating with all the stuff that's already there. Not only that, but changing one thing triggers a cascade of changes everywhere. It's not an exaggeration to say that many large software projects tend to get stuck in the mud due to bad technical debt management. The way to avoid this is to have a plan that addresses the technical debt pain points. (I've stolen this list from Wikipedia):

Causes for technical debt

  • Business pressures, where the business considers getting something released sooner before all of the necessary changes are complete, builds up technical debt comprising those uncompleted changes.
  • Lack of process or understanding, where businesses are blind to the concept of technical debt, and make decisions without considering the implications.
  • Lack of building loosely coupled components, where functions are not modular, the software is not flexible enough to adapt to changes in business needs.
  • Lack of test suite, which encourages quick and risky band-aids to fix bugs.
  • Lack of documentation, where code is created without necessary supporting documentation. That work to create the supporting documentation represents a debt that must be paid.
  • Lack of collaboration, where knowledge isn't shared around the organization and business efficiency suffers, or junior developers are not properly mentored
  • Parallel development at the same time on two or more branches can cause the buildup of technical debt because of the work that will eventually be required to merge the changes into a single source base. The more changes that are done in isolation, the more debt that is piled up.
  • Delayed refactoring – As the requirements for a project evolve, it may become clear that parts of the code have become unwieldy and must be refactored in order to support future requirements. The longer that refactoring is delayed, and the more code is written to use the current form, the more debt that piles up that must be paid at the time the refactoring is finally done.
  • Lack of knowledge, when the developer simply doesn't know how to write elegant code




    Continues in the next article: Why do You Seem to Always be Refactoring?

    Friday, August 1, 2014

    Complex Logic in Unit Tests

    There is a rule somewhere in the land of best practices that says you should avoid putting logic in Unit tests. For example, if you were making a suite of unit tests that had some common logic then you wouldn't create a method to reuse code because that would lead to the test becoming harder to read and understand.

    The idea is that unit tests should read like a behavioural specification. The test should be so simple and clear that it tells the reader exactly how the function should behave without having to poke around inside different functions or crank a for loop mentally to guess what happens.

    Basically,

    1. Don't worry so much about Don't Repeat Yourself, because when you put things into variables, constants or functions you make people hunt down these pieces of code and that makes the test harder to read.
    2. Don't use fancy flow control like loops if you can possibly avoid it, because it means the reader has to work out what the flow control is doing and that hurts readability.

    I've tried this approach for several years and I can tell you that it's all crap. The ultimate result is an unmaintainable mess.

    I feel I've heard these unit test arguments before. Somewhere in our codebase there's a pile of older C/C++ code. It contains functions that are very long; pages and pages of source code per function. I remember this coding style being considered best practice by some, not just for performance but because it made the code easier to read precisely for the same reasons that are now being used for unit tests. In retrospect, it doesn't make anything easier to read and it makes code really hard to maintain.

    Code is code. The reasoning behind Don't Repeat Yourself doesn't magically get suspended because you're writing a unit test. Whenever you're writing code you're constantly building abstractions that help you to solve the problem you're working on. Things like Don't Repeat Yourself allow you to find and leverage these cases of code reuse into tiny frameworks. Unit tests are no exception to this. If your code is maintainable and well factored it will be intrinsically readable because you'll have 5 well chosen lines of code instead of a page of boilerplate.

    Here's an article about not putting logic in tests from Google's testing blog. In this article the example the author chooses is the following:


    @Test public void shouldNavigateToPhotosPage() {
      String baseUrl = "http://plus.google.com/";
      Navigator nav = new Navigator(baseUrl);
      nav.goToPhotosPage();
      assertEquals(baseUrl + "/u/0/photos", nav.getCurrentUrl());
    }

    In this test there is an error. Can you spot it? Ok, let's write the test without the code reuse:

    @Test public void shouldNavigateToPhotosPage() {
      Navigator nav = new Navigator("http://plus.google.com/");
      nav.goToPhotosPage();
      assertEquals("http://plus.google.com//u/0/photos", nav.getCurrentUrl()); // Oops!
    }

    The error is now obvious. Hurrah for no logic in tests!

    However, what if you didn't spot it? What if the error was "photso" instead of a double slash? I don't know about you, but there are a bunch of words I always type wrong. Things like spelling height as "hieght" or "heigth". What if the URL was http://www.heigth.com/? In that case I would be testing a typo... but only sometimes.... If you're not reusing then you're duplicating so this error is going to be in a bunch of places and you have to find and fix them all.

    For that matter, aren't there already libraries to append a path to a base URL? Modifying example #1 is easier than example #2 because in the first case we know that the base URL is common text: it's explicitly marked and checked by the compiler right there in the code. In the second example, how do we know what the text in assertEquals() is supposed to represent without doing a tedious, manual character-by-character comparison?
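There are indeed; for example, java.net.URI will join a base URL and an absolute path without the double-slash pitfall (the helper wrapper is my own):

```java
import java.net.URI;

// java.net.URI resolves an absolute path against a base URL per RFC 3986,
// avoiding the double-slash bug that hand-concatenating strings produces.
final class UrlJoin {
    static String join(String base, String absolutePath) {
        return URI.create(base).resolve(absolutePath).toString();
    }
}
```

join("http://plus.google.com/", "/u/0/photos") comes out as "http://plus.google.com/u/0/photos" - no double slash, whether or not the base ends in one.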

    If logic in unit tests has you concerned about making errors then don't worry: you'll make plenty of errors no matter how you do things. With reuse, a single error in one part of the code has to be code reviewed once and fixed once. If you repeat yourself, it's like the 1970s: code is duplicated all over the place with slight differences, it has to be fixed many times, and it has to be code reviewed until your eyes bleed. If logic in unit tests really bothers you, move your complex logic into re-usable functions and unit test those. Yes, it's a little "meta" but at least you're being honest about the complexity instead of copying around madness without worry.

    To reiterate, best practices like "Don't Repeat Yourself" came into existence to make reading and maintaining all code easier. I can tell you from personal experience that ignoring them results in difficult-to-maintain tests.

    What you really need to do is make what you're doing in your test as clear as possible - don't hide your function's inputs and outputs. I can't go into detail here but here's what the test would look like if I wrote it:


    @Test public void shouldNavigateToPhotosPage() {
      mNav.goToPhotosPage(); // mNav is a member defined in the fixture
      assertEquals(BASE_URL.withPath("/u/0/photos").toUrlString(), 
                   mNav.getCurrentUrl());
    }


    In the example above I've created a test fixture (not displayed). An mNav member variable is built in the test fixture's setUp(). The mNav is always initialized with a base URL. I've made a BASE_URL object using an existing library. In this library BASE_URL is immutable but has several methods that return a mutated (also immutable) object.

    The net result is you can see that calling goToPhotosPage() should go to the "/u/0/photos" path.
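    To make this concrete, here's a minimal sketch of the kind of immutable URL value type I mean. This is illustrative only - the real library's API will differ; withPath() and toUrlString() are just the two methods the test needs:

```java
// A minimal sketch of an immutable URL value type (illustrative only -
// the real library's API will differ). Each "mutator" returns a new
// instance, so a shared BASE_URL constant can never be corrupted by a test.
public final class Url {
    private final String base; // e.g. "https://example.com"
    private final String path; // e.g. "/u/0/photos"

    public Url(String base) {
        this(base, "");
    }

    private Url(String base, String path) {
        this.base = base;
        this.path = path;
    }

    // Returns a modified copy; 'this' is left untouched.
    public Url withPath(String newPath) {
        return new Url(base, newPath);
    }

    public String toUrlString() {
        return base + path;
    }
}
```

    Because every mutator returns a fresh copy, a constant like BASE_URL can be shared by every test in the fixture without any risk of one test polluting another.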

    If mNav had a getBaseUrl() and it was tested elsewhere, then you might use that instead of the BASE_URL constant, since it's more explicit about what the expectation is.

    Don't worry about logic in unit tests - that's a red herring - worry about making what you're testing clear to the reader.

    Wednesday, July 30, 2014

    Lambdas in Java 8 Aren't Just Inner Classes

    I've been watching quite a few instructional videos lately. One of my favorites is about how lambdas are implemented in the JVM. You'd think they simply implemented lambdas as inner classes, but they didn't. They used the fancy invokedynamic instruction to get more performance out of it. invokedynamic is incredibly powerful: it's a JVM instruction that runs a custom bootstrap method to figure out how to resolve the call site, then caches the result if the custom code says that's possible.

    Check it out: Lambdas under the hood

    For those whose eyeballs are allergic to video there's also this document which does a good job of explaining things too: Translation of Lambda Expressions
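    If you want to see the difference for yourself, here's a small demo you can run (behavior as observed on HotSpot-based JDKs; exact generated class names vary by JVM version):

```java
import java.util.function.Supplier;

// A quick way to see that a lambda is not just an anonymous inner class:
// the anonymous class below is generated at compile time as a normal
// (non-synthetic) class like LambdaDemo$1, while the lambda's class is
// spun up at runtime by the invokedynamic/LambdaMetafactory machinery
// and is flagged synthetic.
public class LambdaDemo {
    static boolean anonymousIsSynthetic() {
        Supplier<String> anon = new Supplier<String>() {
            @Override public String get() { return "anon"; }
        };
        return anon.getClass().isSynthetic();
    }

    static boolean lambdaIsSynthetic() {
        Supplier<String> lambda = () -> "lambda";
        return lambda.getClass().isSynthetic();
    }

    public static void main(String[] args) {
        System.out.println(anonymousIsSynthetic()); // false
        System.out.println(lambdaIsSynthetic());    // true
    }
}
```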


    Tuesday, July 29, 2014

    Latest Android has Issues with Transparent GIFs

    So I spent much of the day running around in circles trying to figure out why my game wasn't rendering transparent regions. After way too much time on this issue I eventually realized that it was a bug in Android KitKat 4.4! How fantastically infuriating.

    Well, thankfully that wasn't the only thing I did today. I also got myself thoroughly confused by Android's "density" feature. Basically, since there are a billion different devices out there with wildly different screen sizes and resolutions, Android will automatically scale bitmaps to account for the different screen DPIs. That means that if you're using a phone (like mine) that packs a very large number of pixels into an especially small screen, Android knows it shouldn't make everything tiny. Usually, if a huge number of pixels is available, software assumes it's because the screen is huge! That isn't true on high-resolution smart phones, so Android makes your pictures larger (in pixels) to compensate.

    If you're just putting a picture on the screen this automatic scaling is really handy. If you've got your own 2D game engine it's more confusing than anything. Let's see: I need to scale everything so it fills the screen, but then all my sprites are huge because Android is already scaling them to account for the original high DPI. Double scaling!
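    To untangle my own confusion, here's the arithmetic Android is doing, sketched by hand (in real Android code you'd read DisplayMetrics.density rather than computing the scale yourself - this just shows the underlying math):

```java
// The density arithmetic behind Android's scaling. Android's baseline
// is mdpi (160 dpi); a device exposes a scale factor of (dpi / 160),
// and dp values are multiplied by it to get raw pixels.
public class DensityDemo {
    static int dpToPx(float dp, float densityScale) {
        return Math.round(dp * densityScale);
    }

    public static void main(String[] args) {
        float xhdpiScale = 320f / 160f; // a 320 dpi screen -> scale 2.0
        // A 48dp button becomes 96 physical pixels on that screen:
        System.out.println(dpToPx(48f, xhdpiScale)); // 96
    }
}
```

    The "double scaling" problem is exactly this factor being applied twice: once by Android loading the bitmap, and once by my own engine stretching it to fill the screen.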

    Personally, I love high resolution displays. I would dearly love to have that on my Windows box. You can sort-of do this in Windows 7 by manipulating the DPI setting, although it's mostly for text... and it breaks gadgets in Windows 7 thanks to an Exploder 11 update. Windows 8.1 apparently does it properly, so the next screen I buy can be super high resolution.

    Of course, the Mac side has been doing this for a while with the "retina" displays on MacBook pros. Super high resolution but without everything becoming tiny. Love it! Want more!


    Monday, July 28, 2014

    Static Methods are Awesome!

    On my travels around the internet I occasionally see articles that loudly proclaim that "static functions are evil".



    The reasons given are:

    1. They can't be tested
    2. They can't be mocked
    3. They make the code more difficult to understand
    4. They have hidden side effects

    Every time I see articles like this I figure I must have arrived in Opposite Land, because this advice is in complete contradiction to my two decades of programming experience.

    Static functions are awesome because:
    1. They are easier to test
    2. They are just as easy to mock
    3. They make the code much clearer
    4. They have no side effects

    The disagreement comes down to what a static function represents.

    This is the typical example of why a static function is hard to mock:

    public class Foo {
      public void doSomethingFancy() {
        // stuff...
        int result = Utils.doSomething(parameter);
        // more stuff..
      }
    }

    There's no easy way to mock Utils.doSomething(..). Apparently PowerMock can mock static methods, but let's assume we're not using any special voodoo.

    If we take our original example and replace the static method with an object instantiation:

    public class Foo {
      public void doSomethingFancy() {
        // stuff...
        int result = new Utils().doSomething(parameter);
        // more stuff..
      }
    }

    ..we're still not mockable. Is it because constructors are evil? No, constructors are not evil. Creating a new object at this point in the code is.. well, not evil. Let's just say it's not conducive to mocking. What if we do this?

    public class Foo {
      private final Utils mUtils;

      public Foo(Utils utils) {
        mUtils = utils;
      }

      public void doSomethingFancy() {
        // stuff...
        int result = mUtils.doSomething(parameter);
        // more stuff..
      }
    }

    Great! Now we have code that's mockable - except we also have code that is misleading and coupled.

    First off, we have an instantiable Utils class. This violates a common naming convention of the Java community: "Utils" or "Utilities" in a class name signals that the class is a bunch of static methods. Our Utils class is meant to be instantiated, then used like a collection of static methods. By my calculations the silliness quotient outweighs the added mockability by 34.23%.

    Secondly, it's not clear which methods of our now badly-named Utils class are actually used and which are just along for the ride. In real code, where the Foo class might be 500 lines long, we'd have to go poking around to understand how Foo uses Utils. Which methods do we actually need to mock, and which are noise? Why not use a custom interface instead? This is the Strategy design pattern. Here's what the Foo class looks like with a Strategy:

    public class Foo {
      private final DoSomethingStrategy mStrategy;

      public Foo(DoSomethingStrategy strategy) {
        mStrategy = strategy;
      }

      public void doSomethingFancy() {
        // stuff...
        int result = mStrategy.doSomething(parameter);
        // more stuff..
      }

      // custom interface defines only what we need!
      public interface DoSomethingStrategy {
        int doSomething(String parameter);
      }
    }

    ... and this is how I would have instantiated the Foo class in Java 1.7:

      Foo foo = new Foo(
        new DoSomethingStrategy() {
          public int doSomething(String parameter) {
            return Utils.doSomething(parameter);
          }
        } );

    Thankfully Java 1.8 has gotten rid of all this pointless extra syntax and it becomes:

      Foo foo = new Foo(Utils::doSomething);

    We're passing a static function to a class as a strategy!

    What if you have a Strategy with multiple methods? You can either define two Strategy interfaces (which is fine) or bite the bullet and do it the long way:

      public interface DoSomethingStrategy {
        int doSomething(String parameter);
        long doSomethingElse(long parameter);
      }

      Foo foo = new Foo(
        new DoSomethingStrategy() {
          public int doSomething(String parameter) {
            return Utils.doSomething(parameter);
          }

          public long doSomethingElse(long parameter) {
            return CustomMathUtils.calculateSomething(parameter);
          }
        } );

    It doesn't matter whether your implementation lives in a static method or in a class: the Strategy pattern is worth applying whenever you need a mockable dependency - the cost is so low and the code readability is so much better.

    Before I go I would like to talk about the advantages of static methods.

    Ideally your static functions should be Pure Functions. They don't have to be devoid of every side effect, like printing a log trace, but the closer the better. The only dependencies of a static function should be listed as its parameters. This means that for any given set of parameters the function will always return the same result (i.e., it's deterministic). No weird side effects or black-magic shenanigans allowed. Once you're thinking in pure functions you're on your way to Functional Programming design patterns. Each Pure Function is, by definition, a testable chunk.
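    As a concrete illustration (an invented example, not from any real codebase), compare a pure static function with an impure one:

```java
// A pure function versus an impure one. The pure version depends only
// on its parameters, so the same inputs always produce the same result.
public class PureDemo {
    // Pure: no hidden inputs, no side effects - trivially testable.
    static int applyDiscount(int priceCents, int percentOff) {
        return priceCents - (priceCents * percentOff / 100);
    }

    // Impure: the result depends on hidden static state, so two
    // identical calls return different values.
    static int sCounter = 0;
    static int nextId() {
        return ++sCounter;
    }

    public static void main(String[] args) {
        System.out.println(applyDiscount(2000, 25)); // always 1500
        System.out.println(nextId()); // 1 this time - but not next time
    }
}
```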

    Let's say you're refactoring a BigClass. It's over 1000 lines and is a mix of miscellaneous stuff. Typically, in these cases, there's some fairly self-contained code that implements fancy algorithms, along with some stateful code that implements the object's interface. Where to start? The first thing I do is try to find the lurking stateless algorithm code and pull that out. The way to do that is to look for the hidden static functions: functions that don't touch the object's member variables. Code like this:

    private void foo() {
      mMemeberVar = 7;
      mOtherMemeberVar  = calculateStuff(mOtherMemeberVar);
      mCancelled = false;
    }

    would be stateful, since it uses object member variables (that's what the "m" prefix means). calculateStuff(..) probably doesn't access member variables, but we'd have to poke through a big pile of code to make sure. Either that, or we could mark it as "static" and see what breaks.

    At my day job I work on a team with 20 other developers, and we've more-or-less followed the practice of making any method that doesn't rely on object member variables static. The rule is: if it can be static, mark it static. This is a godsend when you're refactoring some BigClass and you see a bunch of static methods that you can pull out into a private BigClassUtils. Hurray - 300 lines of static methods that I no longer have to wade through to find the good stuff.

    Since we've pulled a big blob of Pure Functions out of BigClass, why don't we add unit tests for them? They're all stand-alone chunks of code - otherwise they couldn't be static methods. So now we can unit test these static methods without worrying about weird dependencies or side effects. Not only that, but calls to BigClassUtils are good candidates for a BigClassStrategy interface as well.
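    Here's the shape of that payoff (BigClassUtils and clampToRange() are invented names for illustration): once the pure functions live in their own class, testing them needs no fixture and no mocks:

```java
// Invented example: a pure function extracted from a BigClass into a
// utilities class, exercised directly with no fixture and no mocks.
final class BigClassUtils {
    private BigClassUtils() {} // no instances - just static pure functions

    // Touches no member state, so it was safe to extract and mark static.
    static int clampToRange(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}

public class BigClassUtilsDemo {
    public static void main(String[] args) {
        System.out.println(BigClassUtils.clampToRange(15, 0, 10)); // 10
        System.out.println(BigClassUtils.clampToRange(-3, 0, 10)); // 0
        System.out.println(BigClassUtils.clampToRange(5, 0, 10));  // 5
    }
}
```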

    Still worried about static methods calling each other ad nauseam? Good! Because you can fix that using the Strategy pattern too:

    static <T, R> List<R> foo( List<T> fromList,
                               Function<T, R> transform ) {
      ArrayList<R> result = new ArrayList<>();
      for ( T item : fromList ) {
        result.add(transform.apply(item));
      }
      return result;
    }

    In this case the "transform" parameter is acting like a Strategy.

    You call foo() like this:

    List<Integer> integers = foo(strings, Utils::doSomething);

    We're passing static functions to static functions!

    Before I continue I should mention that while static methods are awesome, static member variables are evil. Ideally static functions should be pure functions, although you don't need to eliminate every side effect - just the ones that mutate program state.

    What you should do is go into your compiler settings and tell it to emit a warning on every static member variable. There are very few cases where a static member variable makes sense, and in all of those it should be wrapped in a singleton - ideally a singleton with no public static getInstance().
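    Here's what that looks like (a sketch with invented names): the mutable state lives in one instance created at the application's entry point and handed to whoever needs it, so no public static getInstance() is required:

```java
// Sketch: instead of a static member variable, wrap the mutable state
// in an instance and inject it. There is exactly one Counter, but it's
// created by the application's entry point, not fetched via a static
// getInstance() - so tests can simply pass in their own instance.
final class Counter {
    private int count = 0;
    int incrementAndGet() { return ++count; }
    int get() { return count; }
}

final class Worker {
    private final Counter mCounter;
    Worker(Counter counter) { mCounter = counter; }
    void doWork() { mCounter.incrementAndGet(); }
}

public class CompositionRoot {
    public static void main(String[] args) {
        Counter counter = new Counter(); // the "singleton", made once
        Worker workerA = new Worker(counter);
        Worker workerB = new Worker(counter);
        workerA.doWork();
        workerB.doWork();
        System.out.println(counter.get()); // 2
    }
}
```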

    I can't tell you the amount of pain Singletons have caused me. If you're on a large team and create a singleton, it's very hard to stop team members from just reaching into the singleton bag and accessing it directly. That means you're in the middle of testing some code and suddenly the default Singleton implementation has come to life, and now some of your tests are accessing the production server. Doh.

    Sure, you can mock the singleton, but that's a temporary fix at best. At some point some code will need the default singleton implementation, or forget to set the implementation, or leak a resource, and you'll spend your weekend trying to figure out why it only breaks on the build server. If you parameterize all your dependencies it's far easier to debug - everything is right there in front of you. You'll have fairly large constructors, but as I always say, "that's what coupling looks like". If you want small constructors, stop coupling things together (or use any number of techniques to add layers between your class and the stuff it uses, potentially reducing your argument count to 1 - your Strategy).

    I hope I've shown that it's not static functions that are evil - it's badly written static functions: static methods with static state, or code that doesn't use Strategies where it should. Static functions should represent Pure Functions. It's like "the force" in Star Wars: you must always stay on the light side of static functions, or you'll have your hand cut off by your dad.



    And now, here is the picture of a late model Sherman tank: