Software As A Service: A New Corporate Trend


A new trend is sweeping the software industry — providing software as a service. Traditionally, software has been stuck in a gray area between a wholly owned consumer commodity and a licensed one. That is, the consumer either owns the software itself or merely holds a license to use it, depending on who you ask and how the courts interpret it.

Regardless of whether software was owned or licensed, the bottom line was that you paid for it once and were able to use it perpetually. Some companies, such as Red Hat, provide support services as a complement to their product. Subscription-based product support has proven very profitable for Red Hat, and other companies have taken notice.

I Am Altering The Deal

Microsoft and Adobe are two major software powerhouses that are trying to do away with the software status quo. Both companies have decided to change the way they sell their flagship products. Instead of the traditional pay once, use perpetually model they are now offering software as a service: pay monthly if you want to keep using the product.

SaaS is not a novel concept. Oracle and Salesforce have been doing it in the enterprise space, and massively multiplayer online games have been subscription based since the beginning. The model makes sense when the vendor is hosting the software and providing a benefit to the customer such as managing massive server farms and software maintenance on the customer’s behalf.

For a product like Microsoft Office or Adobe Photoshop, the reasons for a subscription-based service are less clear. MS Office 365 does offer some benefits for enterprise users, such as moving Exchange and SharePoint servers to Microsoft’s cloud. However, for home users the only things Microsoft offers with the subscription are 20 GB of SkyDrive storage and 60 minutes of Skype phone call time per month — services that are completely unrelated to the Office applications themselves.

Adobe touts access to all of its software as the main benefit of its subscription model. However, since most users don’t need Adobe’s entire suite, a perpetual license for a single application like Photoshop would be a better investment (one year of a Creative Cloud subscription costs about as much as a perpetual Photoshop license). This is especially true considering that both MS Office and Adobe Photoshop are mature applications that users don’t need to upgrade very frequently.

User + Subscription = Guaranteed Revenue

If you are starting to think that software as a service is an evil scheme for milking more money from consumers, you’re partially right. Only partially, because making more money is not the primary motive — Microsoft and Adobe could easily achieve that by raising prices on their products. No, the main reason these corporate giants are moving to the subscription model is the lure of guaranteed revenue.

With a traditional business model, both companies would release a new version of their software every couple of years. They would get a surge in upgrade sales, and then sales would level off. The revenue curve would look like a sine wave: up, down, up, down. There’s nothing Wall Street hates more than unpredictable revenue projections, and that unpredictability is reflected in the company’s stock price.

By going with software as a service, a company can effectively guarantee how much revenue it will bring in every quarter. It can tell Wall Street, “we have X subscribers that are paying Y dollars monthly, so we will have Z revenue this quarter”. Compare this to “we are going to release a new version of Creative Suite next quarter and we expect to generate X dollars of revenue, but we can’t know for sure how much it will be”. Which sounds better to you as an investor?
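The arithmetic behind that pitch is trivial, which is exactly the point. A sketch with made-up numbers (the subscriber count and fee below are hypothetical, not either company’s actual figures):

```python
# Hypothetical figures — not actual Adobe or Microsoft numbers.
subscribers = 1_000_000      # paying subscribers (X)
monthly_fee = 20.00          # dollars per subscriber per month (Y)
months_in_quarter = 3

# The projection is simple multiplication — no guessing about upgrade sales.
quarterly_revenue = subscribers * monthly_fee * months_in_quarter  # Z
print(f"${quarterly_revenue:,.0f}")  # $60,000,000
```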

The Silver Lining

Despite the fact that there are currently few benefits to consumers in buying software as a service, things may change. Adobe, for instance, could greatly increase the value of its subscription by taking its cloud further than mere remote storage. One of the benefits of a server cloud is the vast processing power it provides. Adobe’s cloud could allow users to batch process a large number of high-resolution images in a fraction of the time it would take on their home computer.

Will Adobe and Microsoft come up with real benefits to offer users purchasing software as a service? It’s hard to say, but the possibility is definitely there. Regardless of whether we will see software giants get creative in how they use the cloud in conjunction with their software, it is clear that the subscription model is here to stay. Companies stand to gain many benefits from this business model. Too many for them to ignore it or go back to the way things were.

This post includes Cloud by James Cridland used under the Creative Commons Attribution 2.0 Generic license.

Transitioning From Employee To Entrepreneur


There is clearly a major difference between being an entrepreneur and an employee. An employee has a permanent job (as permanent as it gets these days, anyway) and gets a paycheck perhaps weekly or bi-weekly. An entrepreneur typically creates a business that (hopefully) brings in money, but making money is not guaranteed.

There is also a third category that doesn’t quite fit into the label of employee or entrepreneur: the freelancer. A freelancer is a Rōnin — an employee without a permanent employer. Freelancers sometimes make the transition to entrepreneur, but even if they don’t they tend to be better off than employees since they diversify their employer base. When you do regular small jobs for many employers, losing any single employer won’t destroy your entire income stream.

Why Be An Employee?

The concept of being an employee is fairly recent. It wasn’t until the industrial revolution and enormous urban growth that employment gained massive popularity. Despite being new, it is now completely pervasive in Western society.

In the United States employment is so ingrained that most children from a young age are expected to go out and get a job once they are old enough. Entrepreneurship is considered risky and is discouraged or not even mentioned as an option. The question is why does our society value employment so much more than entrepreneurship?

In the early days of the employment explosion, a job actually was secure. Once hired, a person could expect to work for a single employer for 40 years and then receive a pension in retirement. However, despite seeing both job stability and pensions evaporate in the modern age, many people still associate being an employee with safety and security.

A False Sense Of Security

Although it may feel secure, having a job gives a false sense of safety, since you can be terminated at any time. The only way your job can be secure is if you have a contract of guaranteed employment. Such a contract is highly unlikely, however, since no law requires one and it would be a liability for the employer.

Most jobs in the US are “at will” employment — meaning you can get canned at any time, with or without cause. As an employee, the sword of Damocles is ever-present even if you are not aware of it. The only exception to this rule is government employees, who can generally stay at their jobs for life. If being an employee is so insecure, why do so few people start their own business?

It is true that being an entrepreneur is risky, but there is more than this perceived risk that drives people away. It begins with the fact that most of us start our careers as employees. Since our schools don’t exactly teach children that they should go out and start a business, most will go on to get jobs. Unfortunately being an employee is not fulfilling for most people, and at least some of them should become freelancers or start their own business.

Failure Is Not An Option

Some time ago I read that the longer you are an employee, the harder it becomes to start your own business. Back then I thought this was because you get so used to being an employee it becomes hard to adjust to something new. There’s more to it than this however. If you’ve been in the employee world for a while (5-10+ years) you are most likely very, very good at what you do. Plus you probably also have a fancy title like “Regional Manager” or “VP of Marketing” or “Director of Product Development”.

Leaving the employee world means leaving that title and peer recognition behind. It also means starting from scratch in many ways. You might be a top-notch Java programmer, but that will probably cover 20% of the things you need to be good at as an entrepreneur. Going from a high-paying job you are really good at to making no money at all, while building new skills from zero, is not an easy transition.

To make things worse, American culture is very anti-failure. Failure is frowned upon to the point that failing feels like a crime of some sort. This means that most people are not particularly keen on putting themselves in a situation where they will most likely fail. We all go through failures when we start in anything, including our careers. The problem is that failure is pretty much only tolerated at that point, that first year or two of your career. After that period an employee is expected to not fail at their job.

Becoming an entrepreneur means accepting that you are going to rewind your career back to the beginning and fail. Probably a lot. Way more than you are used to by now, and way more than is culturally acceptable at this point in your life. It’s almost like dating a 20-year-old college girl when you are a bald 40-year-old man. People are not going to be very approving.

Taking The Leap

It is not a major surprise then that so few people commit to starting their own business. Between losing their status, the recognition of their peers, their high salary, and having to go through a multitude of assured failures, staying at a job seems like the saner thing to do.

Being an entrepreneur or a freelancer is not for everyone. Steve Jobs once remarked that to be an entrepreneur you have to really love what you do. It requires so much work and dedication that you’d be crazy to do it if you didn’t love it. In spite of this those of us who choose this path wouldn’t have it any other way.

This post includes The Choice by Luis Argerich used under the Creative Commons Attribution 2.0 Generic license.

The Conflicting Goals In Software Development


If you ask software developers what annoys them most about their job, a common answer will be “my manager”. Developers tend to assume that management doesn’t care about code quality or getting things done “right”. However, a project manager faces an incredibly difficult balancing act: getting software done quickly and bug-free.

When I was a developer, my sole purpose was to write code, test it, debug it, and make sure everything worked well. As I got into project management, I saw firsthand the conflicting goals of project managers and developers. Over the years the development community has come up with many tools and techniques to aid programmers in their quest. While they all make good sense in theory, they are best used sparingly in practice.

Code Reusability

Good software engineers tend to think far ahead. Rather than writing their code fast and half-assed, they like to plan it out and organize it. An important rule of good software engineering is to modularize code and, ideally, make it reusable down the line. Code reuse is a major tenet of software engineering, but it’s not always practical in the grand scheme of things.
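As a minimal illustration (a hypothetical helper, not from any real codebase), the kind of code engineers hope to reuse is a small, general-purpose utility with no project-specific dependencies:

```python
def chunked(items, size):
    """Split a sequence into fixed-size chunks; the last chunk may be shorter.

    A self-contained helper like this depends on nothing project-specific,
    which is what makes it a plausible candidate for reuse across projects.
    """
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunked([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```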

It’s a nice-sounding concept — write code once and use it in multiple projects. However, the reality is that projects often get cancelled and requirements change many times during the course of development. What is usually more useful to the company is getting a project to a stage where it can be demoed to the customer.

Writing reusable code is often in conflict with getting things done fast. Developers will argue that writing reusable code lets them build future projects faster. However, this is a pretty weak argument, since it’s only true when future projects are very closely related to the current one. In reality this is typically not the case. Thus focusing on code reuse often adds no value to the company while adding to development time.

Unit Testing

Unit testing is a difficult balancing act to get right. On one hand it’s very useful because it provides developers with automated testing that, when done properly, can catch bugs right when they are introduced. This is particularly important because the later on in development a bug is found, the more costly it is to fix.
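For readers who haven’t worked with them, a unit test is simply an automated check that fails the moment a change breaks expected behavior. A minimal sketch using Python’s standard unittest module (the function and values are hypothetical):

```python
import unittest

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_is_a_no_op(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

if __name__ == "__main__":
    # If a later change breaks apply_discount, these tests fail immediately,
    # flagging the bug at the moment it is introduced.
    unittest.main()
```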

The major downside of unit testing is that it takes a significant initial investment of time, and continuous maintenance afterwards. While hardcore developers will probably want to write unit tests as soon as they start a new project, this is not a good idea. The amount of subsequent maintenance required is particularly significant in young, rapidly evolving projects. As the project code goes through major changes the related unit tests must be updated to keep working, which adds up to a lot of wasted time.

The biggest benefit of unit testing is that it is essentially regression testing. Small code changes in a large codebase can break unforeseen things that may not be noticed until much later. Thus unit tests are most beneficial in a large, established project. A drawback of waiting to write unit tests is that, for a large project, doing so requires a significant up-front time investment. However, since the code architecture is locked down by then, this is counterbalanced by negligible maintenance of the unit tests after they are written.

Architecture Design

Just as good software engineers think ahead with code reuse, they also plan ahead for future expansion of the codebase. This means spending a lot of time thinking about the architecture design of the code. The goal is to design a system that can be easily expanded to add new functionality without having to break a lot of existing code in the process.
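One common way to pursue that goal (a hypothetical sketch, not any particular product’s architecture) is to put the parts expected to grow behind a small interface, so new functionality arrives as new classes rather than edits to existing code:

```python
class ReportFormatter:
    """Interface for report output formats."""
    def format(self, data):
        raise NotImplementedError

class TextFormatter(ReportFormatter):
    def format(self, data):
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class CsvFormatter(ReportFormatter):
    def format(self, data):
        return "\n".join(f"{key},{value}" for key, value in data.items())

def render_report(data, formatter):
    # Depends only on the ReportFormatter interface, so it works unchanged
    # with any formatter added later (JSON, HTML, ...) — extension without
    # breaking existing code.
    return formatter.format(data)

print(render_report({"users": 3}, CsvFormatter()))  # users,3
```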

The intention here is good, but the problem is that there is no guarantee the project will even be finished. The reality of the software business is that projects frequently get cancelled or put on hold, and in the early stages the most important thing is to create a usable demo. Until a demo or prototype is available the fate of the project is probably very uncertain.

If a project does end up being taken all the way to completion a good architecture is important. This is still possible even if the initial development stage didn’t concern itself with good architecture design. The codebase can simply be refactored in parallel with development of new functionality, and unit tests can be added at this point as well.

What’s a Developer To Do?

It is unfortunate that developers tend to be isolated from the rest of the company where they work. I was personally guilty of this as well. It wasn’t until I became a product manager that I saw how little of the whole process the developers get to see.

The pure academic methods that many engineers strive for are not a prudent way to write software in the real world. Since it’s probably easier for technical people to understand non-technical things than the other way around, developers should make an effort to understand the ecosystem in which they operate. By doing this they can balance their goals with the goals of their company at large.

This post includes code.close() by Ruiwen Chua used under the Creative Commons Attribution-ShareAlike 2.0 Generic license.