Transitioning From Employee To Entrepreneur


There is clearly a major difference between being an entrepreneur and an employee. An employee has a permanent job (as permanent as it gets these days, anyway) and gets a paycheck perhaps weekly or bi-weekly. An entrepreneur typically creates a business that (hopefully) brings in money, but making money is not guaranteed.

There is also a third category that doesn’t quite fit into the label of employee or entrepreneur: the freelancer. A freelancer is a Rōnin — an employee without a permanent employer. Freelancers sometimes make the transition to entrepreneur, but even if they don’t they tend to be better off than employees since they diversify their employer base. When you do regular small jobs for many employers, losing any single employer won’t destroy your entire income stream.

Why Be An Employee?

The concept of being an employee is fairly recent. It wasn’t until the industrial revolution and enormous urban growth that employment gained massive popularity. Despite being relatively new, it is now completely pervasive in Western society.

In the United States employment is so ingrained that children are expected from a young age to go out and get a job once they are old enough. Entrepreneurship is considered risky and is discouraged or not even mentioned as an option. The question is why our society values employment so much more than entrepreneurship.

In the early days of the employment explosion a job actually was secure. Once hired, a person could expect to work for a single employer for 40 years and then receive a pension in retirement. Yet despite seeing both job stability and pensions evaporate in the modern age, many people still associate being an employee with safety and security.

A False Sense Of Security

Although it may feel secure, having a job gives a false sense of safety since you can be terminated at any time. The only way your job can be secure is if you have a contract of guaranteed employment. Such a contract is highly unlikely, however, since no law requires it and it would be a liability for the employer.

Most jobs in the US are “at-will” employment — meaning you can get canned at any time, with or without cause. As an employee, the sword of Damocles is ever-present even if you are not aware of it. The only exception to this rule is government employees, who can generally stay at their jobs for life. If being an employee is so insecure, why do so few people start their own business?

It is true that being an entrepreneur is risky, but more than this perceived risk drives people away. It begins with the fact that most of us start our careers as employees. Since our schools don’t exactly teach children to go out and start a business, most will go on to get jobs. Unfortunately being an employee is not fulfilling for most people, and at least some of them should become freelancers or start their own businesses.

Failure Is Not An Option

Some time ago I read that the longer you are an employee, the harder it becomes to start your own business. Back then I thought this was because you get so used to being an employee that it becomes hard to adjust to something new. There’s more to it than that, however. If you’ve been in the employee world for a while (5-10+ years), you are most likely very, very good at what you do. Plus you probably also have a fancy title like “Regional Manager” or “VP of Marketing” or “Director of Product Development”.

Leaving the employee world means leaving that title and peer recognition behind. It also means starting from scratch in many ways. You might be a top-notch Java programmer, but that will probably cover only 20% of the things you need to be good at as an entrepreneur. Going from a high-paying job that you are really good at to making no money at all and building new skills from zero is not an easy transition.

To make things worse, American culture is very anti-failure. Failure is frowned upon to the point that failing feels like a crime of some sort. This means that most people are not particularly keen on putting themselves in a situation where they will most likely fail. We all go through failures when we start in anything, including our careers. The problem is that failure is pretty much only tolerated at that point, that first year or two of your career. After that period an employee is expected to not fail at their job.

Becoming an entrepreneur means accepting that you are going to rewind your career back to the beginning and fail. Probably a lot. Way more than you are used to by now, and way more than is culturally acceptable at this point in your life. It’s almost like dating a 20-year-old college girl when you are a bald 40-year-old man. People are not going to be very approving.

Taking The Leap

It is not a major surprise then that so few people commit to starting their own business. Between losing their status, the recognition of their peers, their high salary, and having to go through a multitude of assured failures, staying at a job seems like the saner thing to do.

Being an entrepreneur or a freelancer is not for everyone. Steve Jobs once remarked that to be an entrepreneur you have to really love what you do. It requires so much work and dedication that you’d be crazy to do it if you didn’t love it. In spite of this, those of us who choose this path wouldn’t have it any other way.

This post includes The Choice by Luis Argerich used under the Creative Commons Attribution 2.0 Generic license.

The Conflicting Goals In Software Development

software development in progress

If you ask software developers what annoys them the most about their job, a common answer will probably be “my manager”. Developers tend to assume that management doesn’t care about code quality or getting things done “right”. However, a project manager is faced with an incredibly difficult balancing act: getting software done both quickly and bug-free.

When I was a developer, my sole purpose was to write code, test it, debug it, and make sure everything worked well. As I got into project management, I saw firsthand the conflicting goals of project managers and developers. Over the years the development community has come up with many tools and techniques to aid programmers in their quest. While they all make good sense in theory, they are best used sparingly in practice.

Code Reusability

Good software engineers tend to think far ahead. Rather than writing their code fast and half-assed they like to plan it out and organize it. An important rule of good software engineering is to modularize the code and ideally make it reusable down the line. Code reuse is a major tenet of software engineering, but it’s not always practical in the grand scheme of things.

It’s a nice-sounding concept — write code once and use it in multiple projects. However the reality is that projects often get cancelled and requirements change many times during the course of development. What is usually more useful to the company is getting a project to a stage where it can be demoed to the customer.

Writing reusable code is often in conflict with getting things done fast. Developers will argue that writing reusable code will let them write future projects faster. However this is a pretty weak argument, since it’s only true when future projects are very closely related to the current one. In reality this is typically not the case. Thus focusing on code reuse typically adds no value to the company while adding to development time.
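To make the trade-off concrete, here is a minimal sketch in Python (the function names and data are hypothetical, not taken from any real project). The first version solves exactly the problem in front of you; the second is the “reusable” version that handles arbitrary columns and delimiters. The second is better engineering on paper, but it takes longer to write, document, and test, and that extra effort only pays off if a closely related project ever shows up.

```python
# The "fast" version: tied to this project's exact data, written in minutes.
def export_report(orders):
    """Dump this project's orders to a CSV string."""
    lines = ["id,total"]
    lines += [f"{order['id']},{order['total']}" for order in orders]
    return "\n".join(lines)

# The "reusable" version: generic over columns and delimiter. More flexible,
# but it costs extra time to design, document, and test, and that cost is
# only recovered if a future project actually needs it.
def export_csv(rows, columns, delimiter=","):
    """Serialize a list of dicts to delimited text using the given column order."""
    lines = [delimiter.join(columns)]
    lines += [delimiter.join(str(row[col]) for col in columns) for row in rows]
    return "\n".join(lines)
```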

Unit Testing

Unit testing is a difficult balancing act to get right. On one hand it’s very useful because it provides developers with automated testing that, when done properly, can catch bugs right when they are introduced. This is particularly important because the later on in development a bug is found, the more costly it is to fix.

The major downside of unit testing is that it takes a significant initial investment of time, and continuous maintenance afterwards. While hardcore developers will probably want to write unit tests as soon as they start a new project, this is not a good idea. The amount of subsequent maintenance required is particularly significant in young, rapidly evolving projects. As the project code goes through major changes the related unit tests must be updated to keep working, which adds up to a lot of wasted time.

The biggest benefit of unit testing is that it is essentially regression testing. Small code changes in a large codebase can break unforeseen things that may not be noticed until much later. Thus unit tests are most beneficial in a large, established project. A downside of waiting to write unit tests is that for a large project they represent a significant up-front time investment. However, since the code architecture is locked down by that point, this cost is counterbalanced by the negligible maintenance the tests require after they are written.
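For anyone who hasn’t worked with them, a unit test is just a small automated check that runs on every build. Below is a minimal sketch using Python’s built-in unittest module (the apply_discount function and its expected behavior are made up for illustration). If a later refactor quietly changes what apply_discount returns, these assertions fail immediately instead of the bug surfacing months later, which is exactly the regression-testing benefit described above. The flip side is also visible: change the function’s signature or rounding rules and every one of these tests has to be updated as well.

```python
import unittest

# Hypothetical function under test: the kind of small utility whose behavior
# can silently change when a large codebase gets refactored.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(59.99, 100), 0.0)

if __name__ == "__main__":
    unittest.main()
```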

Architecture Design

Just as good software engineers think ahead with code reuse, they also plan ahead for future expansion of the codebase. This means spending a lot of time thinking about the architecture design of the code. The goal is to design a system that can be easily expanded to add new functionality without having to break a lot of existing code in the process.

The intention here is good, but the problem is that there is no guarantee the project will even be finished. The reality of the software business is that projects frequently get cancelled or put on hold, and in the early stages the most important thing is to create a usable demo. Until a demo or prototype is available the fate of the project is probably very uncertain.

If a project does end up being taken all the way to completion a good architecture is important. This is still possible even if the initial development stage didn’t concern itself with good architecture design. The codebase can simply be refactored in parallel with development of new functionality, and unit tests can be added at this point as well.
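As a rough sketch of what “designing for expansion” versus “just ship the demo” can look like (the render example below is hypothetical), the first function hard-codes every output format, while the second keeps a small registry so new formats can be added without touching existing code. The point here is that the hard-coded version is often good enough until the project’s fate is known, and the registry can be introduced later during a refactoring pass.

```python
# Prototype version: hard-code each case and move on.
def render(document, fmt):
    if fmt == "html":
        return f"<p>{document}</p>"
    elif fmt == "text":
        return document
    raise ValueError(f"unsupported format: {fmt}")

# "Designed for expansion" version: a registry of renderers, so adding a
# format means registering a function rather than editing existing code.
RENDERERS = {
    "html": lambda doc: f"<p>{doc}</p>",
    "text": lambda doc: doc,
}

def render_extensible(document, fmt):
    try:
        return RENDERERS[fmt](document)
    except KeyError:
        raise ValueError(f"unsupported format: {fmt}")

# Later, Markdown support becomes a one-line registration (renderer is a stub):
RENDERERS["markdown"] = lambda doc: doc
```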

What’s a Developer To Do?

It is unfortunate that developers tend to be isolated from the rest of the company where they work. I was personally guilty of this as well. It wasn’t until I became a product manager that I saw how little of the whole process the developers get to see.

The pure academic methods that many engineers strive for are not a prudent way to write software in the real world. Since it’s probably easier for technical people to understand non-technical things than the other way around, developers should make an effort to understand the ecosystem in which they operate. By doing this they can balance their goals with the goals of their company at large.

This post includes code.close() by Ruiwen Chua used under the Creative Commons Attribution-ShareAlike 2.0 Generic license.

Why In-App Purchases Erode App Store Value

app store on Apple iPhone

If you have a smartphone, be it an Android handset or an iPhone, you generally get your apps from the provided mobile app store. In the early days, when the iPhone was first introduced, app prices stabilized in the low single digits. The 99-cent price point became fairly standard, and most app sellers focused on the impulse-buy strategy.

App Store Of The Future

Fast forward to today and the situation hasn’t changed dramatically. Most apps still sell for $1-3, but with one major difference. Developers today have access to in-app purchases as an option. The idea is to provide premium or additional features on top of the base app for an additional fee. The user can get the base app free or really cheap, and then buy more features as needed.

This sounds good in theory, but in practice it creates a very negative experience for the user. Whereas before a user who purchased an app knew he was getting a complete app, today with in-app purchasing he might be getting very little. It’s perfectly possible to shell out money and then find out that there’s a lot more purchasing to be done to get the functionality you expected.

Bait And Switch

If this sounds like a scam, it pretty much is. For example, after buying a drawing app for the iPad you may find out that getting any real use out of the program requires buying additional packages. In a drawing app this may include having to pay for additional brush styles, effects, colors, and anything else you can think of. While you may have paid 99 cents to download the base app, getting all the functionality may end up costing you dozens of additional $2-3 purchases.

The situation is particularly bad with games, where developers have capitalized on the way Android and iOS operating systems work. Games specifically targeted at children are often marketed as a free download, only to later present the child with a prompt requesting to buy credits or coins or some other in-game currency.

Children typically don’t have a very good understanding of money, especially when a virtual transaction is taking place. As a result there have been many cases of children racking up thousands of dollars of in-app purchases. Unfortunately the default settings currently provided on both iOS and Android make this a widespread issue.

Think Of The Children

In Android by default there are no restrictions on any purchases, and thus a child can easily rack up an in-app purchase bill. However, since parents are aware of this, they will probably not let their child play with their Android device. The default settings of iOS, however, provide a false sense of security to parents.

In iOS, a purchase requires that the user enter a password. This is required to install both free and paid apps. What typically happens is that a child asks his parent to approve a free game install. So far so good. However, the password authorization remains in effect for 15 minutes by default. Thus a child can make in-app purchases for up to 15 minutes after downloading the game without entering a password.

It may be hard to believe that a child can rack up a few thousand dollars of in-app purchases in just 15 minutes. However this is made very feasible by the fact that in-app game purchases often go up to $100 for in-game currency bundles; at $100 a bundle, a couple dozen taps is all it takes. This is clearly an amount that no adult in their right mind would be paying on top of a 99-cent app, and these practices have triggered an investigation by the UK government.

Both Android and iOS do allow the user to disable in-app purchases, and iOS allows changing the default 15-minute password window so that the password is required immediately, as a lighter alternative to disabling in-app purchases completely. Android also has a number of third-party options to limit access. However, neither OS comes locked down by default.

Value Erosion Zone

Kids will be kids, and with all the high-profile cases of children spending lots of money in the app store, both Google and Apple will probably be addressing the issue in the near future. Apple has already begun to clearly label apps that contain in-app purchases. The real problem with in-app purchases, though, is that they severely dilute the value of the app itself.

Whether it’s actual value that is being diluted or whether it’s a purely psychological issue is not important. When I see an app that’s labeled as having in-app purchases I immediately skip it. This is because it’s simply not worth spending the time to figure out what the app offers out of the box and what I would have to buy via in-app purchases.

This is a major problem because as more and more apps adopt in-app purchases, the value of the app store as a whole is eroded. Unfortunately it’s a rather lucrative selling strategy, with in-app purchases making up as much as 76% of iPhone app store revenue in the US (and even more in Asia).

It remains to be seen what effect in-app purchasing will have in the long term. What is certain is that in most cases it creates a negative experience for users and hurts the value of the app store itself. What was meant to be a beneficial feature that would allow users to unlock premium functionality has instead been turned by app developers into a micropayment system within the app itself.

This post includes Home by Robert S. Donovan used under the Creative Commons Attribution 2.0 Generic license.