I've been watching quite a few instruction videos lately. One of my favorites is about how lambdas are implemented in the JVM. One would think they simply implemented lambdas as inner classes, but they didn't. They used the fancy invokedynamic instruction to get more performance out of it. invokedynamic is incredibly powerful: it's a JVM instruction that runs a custom bootstrap method to figure out how to resolve the method call, then caches the result if the custom code says that's possible.
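To make this concrete, here's a tiny sketch (the class and variable names are mine, not from the video): compiling a lambda like the one below does not produce an anonymous inner class in the bytecode; instead javac emits an invokedynamic call site that the JVM links on first use via LambdaMetafactory.

```java
import java.util.function.Function;

public class LambdaDemo {
    public static void main(String[] args) {
        // javac compiles this lambda to an invokedynamic instruction;
        // the JVM resolves it once via LambdaMetafactory.metafactory and
        // caches the linked call site (inspect it with: javap -c LambdaDemo).
        Function<String, Integer> len = s -> s.length();
        System.out.println(len.apply("hello"));
    }
}
```

Running it prints the length of "hello"; the interesting part is in the bytecode, not the output.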
Check it out: Lambdas under the hood
For those whose eyeballs are allergic to video there's also this document which does a good job of explaining things too: Translation of Lambda Expressions
Wednesday, July 30, 2014
Tuesday, July 29, 2014
Latest Android has Issues with Transparent GIFs
So I spent much of the day running around in circles trying to figure out why my game wasn't rendering transparent regions. After way too much time on this issue I eventually realized that it was a bug in Android KitKat 4.4! How fantastically infuriating.
Well, thankfully that wasn't the only thing I did today. I also got myself all confused by Android's "density" feature. Basically, since there are a billion different devices out there with widely different screen sizes and resolutions, Android will automatically scale bitmaps to account for the different screen DPIs. That means that if you're using a phone (like mine) that has a very large number of pixels but is especially small, Android knows that it shouldn't make everything tiny. Usually, if you have a huge number of pixels available, software will assume it's because your screen is huge! This isn't true on high resolution smartphones, so Android will make your pictures larger (in pixels) to compensate.
If you're just putting a picture on the screen this automatic scaling is really handy. If you've got your own 2D game engine it's more confusing than anything. Let's see: I need to scale everything so it fills the screen, but then all my sprites are huge because Android is already scaling them to account for the original high DPI. Double scaling!
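Here's a sketch of the arithmetic behind the double scaling (the method name is mine; the real scaling happens when Android loads bitmaps from resources): the density factor is dpi / 160, with mdpi as the baseline, so a 100-pixel sprite on a 320 dpi (xhdpi) screen comes back 200 pixels wide before your engine touches it.

```java
public class DensityDemo {
    // Android's density factor is dpi / 160 (mdpi is the 1.0 baseline).
    // A bitmap loaded from a density-unqualified resource folder gets
    // pre-scaled by this factor before your code ever sees it.
    static int scaledSize(int basePx, int deviceDpi) {
        return Math.round(basePx * (deviceDpi / 160f));
    }

    public static void main(String[] args) {
        System.out.println(scaledSize(100, 160)); // mdpi: unchanged
        System.out.println(scaledSize(100, 320)); // xhdpi: doubled
    }
}
```

If your engine then scales that sprite again to fill the screen, you've scaled twice.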
Personally, I love high resolution displays. I would dearly love to have one on my Windows box. You can sort of do this with Windows 7 by manipulating the DPI setting, although this is mostly for text... and it breaks gadgets in Windows 7 thanks to an Exploder 11 update. Windows 8.1 apparently does it properly, so the next screen I buy can be super high resolution.
Of course, the Mac side has been doing this for a while with the "Retina" displays on MacBook Pros. Super high resolution, but without everything becoming tiny. Love it! Want more!
Monday, July 28, 2014
Static Methods are Awesome!
On my travels around the internet I occasionally see articles that loudly proclaim that "static functions are evil".
The reasons given are:
- They can't be tested
- They can't be mocked
- They make the code more difficult to understand
- They have hidden side effects
Every time I see articles like this I figure I must have arrived in Opposite Land™ because this advice is in complete contradiction to my two decades of programming experience.
Static functions are awesome because:
- They are easier to test
- They are just as easy to mock
- They make the code much clearer
- They have no side effects
The disagreement comes down to what a static function represents.
This is the typical example of why a static function is hard to mock:
public class Foo {
    public void doSomething() {
        // stuff...
        int result = Utils.doSomething(parameter);
        // more stuff...
    }
}
There's no easy way to mock Utils.doSomething(..). Apparently, PowerMock can mock static methods but let's assume we're not using any special voodoo.
If we take our original example and replace the static method with an object instantiation:
public class Foo {
    public void doSomethingFancy() {
        // stuff...
        int result = new Utils().doSomething(parameter);
        // more stuff...
    }
}
...we're still not mockable. Is it because constructors are evil? No, constructors are not evil. Creating a new object at this point in the code is... well, not evil. Let's just say it's not conducive to mocking. What if we do this?
public class Foo {
    private final Utils mUtils;

    public Foo(Utils utils) {
        mUtils = utils;
    }

    public void doSomethingFancy() {
        // stuff...
        int result = mUtils.doSomething(parameter);
        // more stuff...
    }
}
Great! Now we have code that's mockable, except we also have code that is misleading and coupled.
First off, we have an instantiable Utils class. This violates a common naming convention of the Java community, which adds "Utils" or "Utilities" to a class name to signal that it contains a bunch of static methods. Our Utils class is meant to be instantiated and then used like a collection of static methods. By my calculations the silliness quotient outweighs the added mockability by 34.23%.
Secondly, it's not clear which methods of our now badly named Utils class are being used and which methods are just along for the ride. In real code, where the Foo class might be 500 lines long, we would have to go poking around and try to understand how the Foo class was using the Utils class. Which methods do we actually need to mock and which are noise? Why not use a custom interface instead? This is called the Strategy design pattern. Here's what the Foo class would look like with a Strategy pattern:
public class Foo {
    private final DoSomethingStrategy mStrategy;

    public Foo(DoSomethingStrategy strategy) {
        mStrategy = strategy;
    }

    public void doSomethingFancy() {
        // stuff...
        int result = mStrategy.doSomething(parameter);
        // more stuff...
    }

    // custom interface defines only what we need!
    public interface DoSomethingStrategy {
        int doSomething(String parameter);
    }
}
... and this is how I would have instantiated the Foo class in Java 1.7:
Foo foo = new Foo(
    new DoSomethingStrategy() {
        public int doSomething(String parameter) {
            return Utils.doSomething(parameter);
        }
    });
Thankfully Java 1.8 has gotten rid of all this pointless extra syntax and it becomes:
Foo foo = new Foo(Utils::doSomething);
We're passing a static function to a class as a strategy!
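Here's the whole thing stitched together as a runnable sketch (Foo, Utils, and the strategy interface follow the hypothetical example above; the method bodies are stand-ins):

```java
public class StrategyDemo {
    // Custom interface defines only what Foo needs.
    interface DoSomethingStrategy {
        int doSomething(String parameter);
    }

    // The "collection of static methods" class from the example.
    static class Utils {
        static int doSomething(String parameter) {
            return parameter.length(); // stand-in for the real work
        }
    }

    static class Foo {
        private final DoSomethingStrategy mStrategy;

        Foo(DoSomethingStrategy strategy) {
            mStrategy = strategy;
        }

        int doSomethingFancy(String parameter) {
            return mStrategy.doSomething(parameter);
        }
    }

    public static void main(String[] args) {
        // A static method, passed straight in as the strategy.
        Foo foo = new Foo(Utils::doSomething);
        System.out.println(foo.doSomethingFancy("hello"));
    }
}
```

In a test you'd pass a lambda instead: new Foo(s -> 42).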
What if you have a Strategy with multiple methods? You can either define two Strategy interfaces (which is fine) or bite the bullet and do it the long way:
public interface DoSomethingStrategy {
    int doSomething(String parameter);
    long doSomethingElse(long parameter);
}

Foo foo = new Foo(
    new DoSomethingStrategy() {
        public int doSomething(String parameter) {
            return Utils.doSomething(parameter);
        }
        public long doSomethingElse(long parameter) {
            return CustomMathUtils.calculateSomething(parameter);
        }
    });
It doesn't matter whether your interface implementation lives in a static method or in a class: always apply the Strategy pattern when you need something mockable. The cost is so low and the code readability is so much better.
Before I go I would like to talk about the advantages of static methods.
Ideally your static functions should be Pure Functions. If your static methods aren't pure functions then you're evil. They don't have to be devoid of all side effects, like printing out a log trace, but the closer the better. The only dependencies of a static function should be listed as parameters. This means that for any given set of parameters the function will always return the same result (aka: referentially transparent). No weird side effects or black magic shenanigans allowed. Once you're thinking in pure functions you're on your way to Functional Programming design patterns. Each Pure Function is by definition a testable chunk.
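A sketch of what that looks like (the function body is a stand-in): every dependency arrives as a parameter, so the same input always produces the same output, and a test is just a call plus an assert.

```java
public class PureDemo {
    // Pure: no member state, no side effects; the result depends
    // only on the parameter.
    static int doSomething(String parameter) {
        return parameter == null ? 0 : parameter.length() * 2;
    }

    public static void main(String[] args) {
        // Same parameters, same result, every time.
        System.out.println(doSomething("abc"));
        System.out.println(doSomething("abc"));
    }
}
```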
Let's say you're refactoring a BigClass. It's over 1000 lines and is a mix of miscellaneous stuff. Typically, in these cases, there's some fairly self-contained code that implements fancy algorithms along with some stateful code that implements the object's interface. Where to start? The first thing I do in this case is try to find the lurking stateless algorithm code and pull it out. The way you do that is to look for the hidden static functions. These are functions that don't touch the object's member variables. Code like this:
private void foo() {
    mMemberVar = 7;
    mOtherMemberVar = calculateStuff(mOtherMemberVar);
    mCancelled = false;
}
...would be stateful since it uses object member variables (that's what the "m" prefix means). calculateStuff(..) probably doesn't access object member variables, but we'd have to poke through a big pile of code to make sure. Either that, or we could mark it as "static" and see what breaks.
At my day job I work on a team with 20 other developers and we've more or less followed the practice of making any method that doesn't rely on object member variables static. The rule is: if it can be static, mark it static. This is a godsend when you're trying to refactor some BigClass and you see a bunch of static methods that you can pull out into a private BigClassUtils. Hurray! 300 lines of static methods that I no longer have to wade through to find the good stuff.
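As a sketch of the payoff (BigClassUtils and the method body here are hypothetical): once the compiler confirms calculateStuff(..) can be static, it can move out wholesale and be tested in isolation.

```java
// Hypothetical extraction: calculateStuff touched no member
// variables, so it compiled fine as static and moved out of BigClass.
public class BigClassUtils {
    static int calculateStuff(int value) {
        return value * 2 + 1; // stand-in for the real algorithm
    }

    public static void main(String[] args) {
        System.out.println(calculateStuff(3));
    }
}
```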
Since we've pulled a big blob of Pure Functions out of BigClass, why don't we add unit tests for them? I mean, they're all standalone chunks of code; otherwise they couldn't be static methods. So now we can unit test these static methods without worrying about weird dependencies or side effects. Not only that, but calls to BigClassUtils are good candidates for the BigClassStrategy interface as well.
Still worried about static methods calling each other ad nauseam? Good! Because you can fix that using the Strategy pattern too:
static <T, R> List<R> foo(List<T> fromList,
                          Function<T, R> transform) {
    List<R> result = new ArrayList<>();
    for (T item : fromList) {
        result.add(transform.apply(item));
    }
    return result;
}
In this case the parameter "transform" is acting like a Strategy.
You call foo() like this:
List<Integer> integers = foo(strings, Utils::doSomething);
We're passing static functions to static functions!
Before I continue I should mention that while static methods are awesome, static class members are evil. Ideally static functions should be pure functions, although you don't need to eliminate all side effects; just the ones that mutate program state.
What you should do is go into your compiler settings and tell the compiler to emit a warning on every static member variable. There are very few cases where it makes sense to have static member variables, and in those cases they should be wrapped in a Singleton. Ideally a Singleton with no public static getInstance().
I can't tell you the amount of pain Singletons have caused me. If you're on a large team and create a singleton, it's very hard to stop team members from just reaching into the singleton bag and accessing it directly. That means you're in the middle of testing some code and suddenly the default Singleton implementation has come to life and is causing some of your tests to access the production server. Doh.
Sure, you can mock the singleton, but that is a temporary fix at best. At some point there will be code that needs the default singleton implementation, or forgets to set the implementation, or leaks some resource, or some such issue, and you'll spend your weekend trying to figure out why it only breaks on the build server. If you parameterize all your dependencies it's far easier to debug. Everything is right there in front of you. You'll have fairly large constructors, but I always say "that's what coupling looks like". If you want small constructors, stop coupling things together (alternatively, use any number of techniques to add layers between your class and the stuff you're using, potentially reducing your argument count to 1: your Strategy).
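A minimal sketch of the difference (Server and Client are made-up names): the dependency is a constructor parameter, so a test supplies a stub with a lambda and there is no global Singleton to reset between tests.

```java
public class ConfigDemo {
    // The dependency, as an interface rather than a static singleton.
    interface Server {
        String fetch(String key);
    }

    static class Client {
        private final Server mServer;

        Client(Server server) {
            mServer = server; // coupling is visible in the constructor
        }

        String load(String key) {
            return mServer.fetch(key);
        }
    }

    public static void main(String[] args) {
        // In a test the "server" is just a lambda; nothing can
        // accidentally reach the production implementation.
        Client client = new Client(key -> "stub:" + key);
        System.out.println(client.load("user"));
    }
}
```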
I hope to have shown that it's not static functions that are evil; it's badly written static functions that are evil: static methods with static state, or static methods not using Strategies appropriately. Static functions should represent Pure Functions; if you're not writing Pure Functions it's because you're evil. It's similar to "the Force" in Star Wars: you must always try to stay on the light side of static functions or you'll have your hand cut off by your dad.
And now, here is the picture of a late model Sherman tank:
Thursday, July 24, 2014
Too many tabs - can't find things?
I love tabbed browsing to the point where I typically have three dozen tabs open in multiple windows. This makes it hard to find things. I mean, I *know* I have a netflix window in there somewhere but it's going to take me a good 5 minutes to find it. This is a job for Tab Ahead!
With Tab Ahead, all you need to do is hit Alt-T and type in a search string. If I was looking for my Netflix tab I would type "netflix" then check the search results for the tab I'm looking for. If it's there, I can press return to jump to it. Tab Ahead will search all open tabs (in the current window or across all windows) for tabs containing that string, be it in the title or text of a web page.
For Firefox, the best I could find is TabHunter. A prehistoric extension that works ok but is showing its age. If anyone finds something better for Firefox please let me know!
Tuesday, July 22, 2014
Invasive Ad networks on Android
So there I was, looking through all the Android ad networks trying to see if any made "sense" for the Android application I was developing, when I ran across sellAring. It replaces the "ring" sound played when you make a call with an ad. That has got to be annoying... assuming you make calls with your phone... which is sooooo y2k.
This is part of a bigger problem of free Android apps that do very shady things. Free isn't free, so to speak; you have to shovel through the muck.
To help I found AppBrain (Airpush?) Ad Detector. This little application can help you keep track of:
- Applications that require access to private data
- The ad networks used by the installed applications
- The social network SDKs used (like facebook or google plus etc..)
- The SDKs used by your installed applications (apparently 2 applications I have use Apache Commons I/O)
- The notifications posted by applications, which it can also block
Monday, July 21, 2014
OMG! It's OGF! How to Gauge Code Quality
A little while ago we were having trouble figuring out a way of determining code quality. Sure, you could use metrics produced by tools like Checkstyle or unit test code coverage tools, but I've never found these metrics to tell the whole picture. Technical debt and code quality are multifaceted problems that require the skills and experience of senior engineers. It's unlikely that any computer program will ever be devised that can give an accurate picture of code quality.
It's very easy to create a computer program that can find monstrously awful code. Some might call it the compiler. It's more difficult to create a computer program that can find merely mediocre code. If you're at an organization that's worried about code quality odds are you don't have monstrously awful code to worry about.
In QA they use a metric called Overall Good Feeling (or OGF for the acronym obsessed). The concept behind this is very simple: you just give your overall feeling as to the quality of the product as a number from 1 to 5. Five is very high confidence and one is no confidence. The reason we use this system is that we had trouble determining the quality of our products using metrics alone. You could use bug counts, regression counts and similar things to try to create an objective measure of the quality of a product, but this will never give you the full picture. Why not just ask? OGF is a great way of polling the intuition of the people who are responsible for testing the product. Why not use the same technique for measuring code quality?
Let's say we wanted to figure out the quality of the code that makes up some module; let's call it module A. First we gather the relevant developers together in a room. Next we ask them all to come up with a number between one and five (where five is excellent code quality and one is terrible) that best encapsulates the code quality of the overall module. All the developers then produce their numbers at the same time. The best way to do this is to use a system similar to planning poker, where all the developers have five playing cards numbered one to five. Why use playing cards? The developers first select the card that corresponds to their number and put it face down on the table. When everyone has chosen, the cards are turned over at the same time. The point of doing it this way is that you want all developers to poll their intuitions without being affected (infected?) by the views of their peers.
Of course, this number doesn't tell the whole story. It's also important to know how familiar a developer is with a piece of code. This familiarity quotient can give us insight into the developer's choice. So, alongside the code quality number (aka CQN), the developers rate their own familiarity with the code (cards aren't needed for this step).
Let's assume that we have a group of developers who have given code quality and familiarity numbers for our module A. We now graph each developer's point on a graph like below:
If a developer is very familiar with the code and rates the code quality very highly we would get a point on the graph like this:
If the developer is not very familiar with the code and thinks the code quality is terrible we would get a point on the graph like this:
Once all the points of all the developers are graphed we can see patterns very easily. For instance, this is good code:
This, on the other hand, is bad code:
However, I expect other patterns as well. These "other" patterns indicate a lack of convergence. But why?
Code with a steep learning curve might look like this:
Graphs like the above might also indicate that the cluster of developers who wrote the code like it but no one else can make heads or tails of it. It could be that the developers have written it in an idiosyncratic style (I know! I'll parse this using Perl and a banana!) or it might simply be intrinsically complex. Either way it's going to cause problems because new developers will have a tremendously difficult time learning how to interact with the code (What? I need a banana?).
If a module is simple and easy to understand but doesn't address edge cases in the design space we might get a graph that looks like this:
This code is more dangerous because developers just coming to the code feel that it should be easy to change and modify. However, anyone who's spent time with the code will realize that it's a pain to make any change work.
Controversial code would look like this:
There are many potential reasons why this could happen. All of them are worth investigating.
The following is code that everyone is afraid to touch. I call it "Haunted House Code":
...because no one goes in there. Most likely all the developers who wrote this code have moved on. Graphs like this imply that estimates will be random numbers and that much work will be done before the real difficulty of the task emerges.
So, in conclusion: while code metrics are very useful, I don't believe they can give a completely accurate story. I think that simply asking the developers what they think of the code quality is a valid metric. They are using the code every day, after all; they are the most qualified people to give an assessment. It's important to know where the crap is buried, because it is these pieces of code that will give you problems when you try to add new features. Software development is a minefield; think of these graphs as a mine detector.