New Project Checklist

Introduction
 
Over the years (a lot of them!), I have been involved in projects, some of which succeeded tremendously well, some that fell flat on their faces, and others that limped along with peaks and troughs of stress and plain sailing in equal measure. I have found that, in general, when we hold a "lessons learned" session at the end of a project and carry those lessons forward into new developments, our success rate climbs, while stress levels and the number of late nights spent fighting fires fall. So this article is a light read: a collection of things to remember and think about when starting a project - things that may seem obvious, but that deserve a place in your mental or project checklist. They are questions that, if not asked, can back you into a corner technically or operationally a while down the road when your project gets into production (or can't, because you didn't look far enough ahead...). This is only the tip of the iceberg and a starting point. I will add more topics as they come to mind, and I look forward to you all adding comments that we can bring into the article!
 
Platforms and Operating Systems
 
This one may seem very obvious, but it has some gotchas. You may be targeting the Windows platform, but what versions do you support? The versions you choose to support may restrict your use of technology. For example, as of the date of this article, Windows XP support from Microsoft has generally stopped (or at least is dying a very slow death), yet XP still has a very large installed user base. If you choose to support running your code on XP, be aware that it cannot run any technology that requires .NET 4.5, for example. In addition to the platform you run on, you also need to consider the platforms that access your system, as they may not "play nice" together and may bring some unwanted pain.
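
One way to avoid nasty surprises is to check the host platform at startup and fail fast with a clear message. Here is a minimal Python sketch of the idea; the minimum version constant is a hypothetical example, not a real requirement of any particular library:

```python
import platform
import sys

# Hypothetical baseline: assume a dependency needs Windows Vista (NT 6.0)
# or later - e.g. it is unavailable on Windows XP (NT 5.x).
MIN_WINDOWS_RELEASE = (6, 0)

def check_platform() -> None:
    """Fail fast at startup if the host OS is below our supported baseline."""
    if platform.system() == "Windows":
        # platform.version() on Windows looks like "10.0.19041"
        version = tuple(int(p) for p in platform.version().split(".")[:2])
        if version < MIN_WINDOWS_RELEASE:
            sys.exit(
                f"Unsupported Windows version {platform.version()}: this "
                f"application requires NT {MIN_WINDOWS_RELEASE[0]}."
                f"{MIN_WINDOWS_RELEASE[1]} or later."
            )

if __name__ == "__main__":
    check_platform()
    print("Platform check passed - continuing startup.")
```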
 
Users and their data
 
Ahhh, users! Those dreaded creatures that consume the fruits of our labour.... I carried out some research a number of years ago and came to the conclusion that most applications would benefit from being designed first for fringe users (such as those with low vision). Apart from ensuring you comply with accessibility legislation, research has shown that the design fundamentals you need for less abled users help focus the design, making it cleaner and simpler for all users.
 
While we are talking about users, what kind of volume are you expecting - at launch, in 12 months, in 48 months? This question should have a deep impact on any area of your system that requires a swift response, or that involves data storage or high data throughput.
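
Even a back-of-the-envelope projection helps frame the conversation. This little sketch uses made-up figures (500 users at launch, 10% compound monthly growth) purely to illustrate how quickly the numbers can run away from you:

```python
def projected_users(initial: int, monthly_growth: float, months: int) -> int:
    """Compound monthly growth - illustrative arithmetic, not a forecast model."""
    return round(initial * (1 + monthly_growth) ** months)

# Hypothetical figures: 500 users at launch, growing 10% per month.
for horizon in (0, 12, 48):
    print(f"Month {horizon:2d}: ~{projected_users(500, 0.10, horizon):,} users")
```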
 
Data onboarding
 
There are few systems these days that don't interact with other systems, and in "line of business" applications it is commonplace to migrate data from one application to another. When planning to bring users live, therefore, we need to ask at what point their historical data needs to be moved into the system, and how to get it in. This has implications not only for overall data storage, but also for the efficiency of your data onboarding process itself. When you bring in data, are there dependencies the data expects to be present? Will a query fail to return data if that expected default data is missing? Another question to ask about onboarding is its frequency. Is the data you are bringing in a one-off job, or something that needs to happen on a regular basis? If regular, then you need to consider how incoming data should be gathered - for example, do you need a procedure for onboarding only incremental data (see the sketch below)? ... questions, questions, questions....
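
One common incremental approach is a "high-water mark": record the newest timestamp you have already loaded, and on each run pull only rows modified since then. A minimal Python/SQLite sketch follows; the `customers` table and `modified_at` column are hypothetical placeholders:

```python
import sqlite3

def load_incremental(source: sqlite3.Connection,
                     target: sqlite3.Connection) -> int:
    """Copy only rows changed since the last successful onboarding run."""
    # The newest timestamp already in the target acts as the high-water mark.
    last_mark = target.execute(
        "SELECT COALESCE(MAX(modified_at), '1970-01-01') FROM customers"
    ).fetchone()[0]

    # Pull only rows the source has modified since that mark.
    rows = source.execute(
        "SELECT id, name, modified_at FROM customers WHERE modified_at > ?",
        (last_mark,),
    ).fetchall()

    # Upsert so re-runs after a partial failure are safe to repeat.
    target.executemany(
        "INSERT OR REPLACE INTO customers (id, name, modified_at) "
        "VALUES (?, ?, ?)",
        rows,
    )
    target.commit()
    return len(rows)
```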
 
Data volume
 
When systems are designed, often only a cursory nod is given to the scalability of the architecture and to load testing. If you expect any kind of volume, you must put together a proper load-testing strategy - not only for the data, but also for the querying of that data. You may find that the beautifully über-normalised database you are so proud of is as slow as treacle running uphill due to too many joins and inner selects. You need to plan ahead and de-normalise or pre-stage data where necessary. While it's a great accolade to the business that you encounter scaling problems, it's a far better accolade to you that you designed and tested for them properly in the first place.
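
You don't need heavyweight tooling to get started: seed a table with a production-like row count and time the queries you actually expect to run. A minimal sketch, with placeholder volumes and a hypothetical `orders` table:

```python
import sqlite3
import time

# Seed an in-memory database with a realistic row count (1M placeholder rows).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    ((i % 10_000, i * 0.01) for i in range(1_000_000)),
)

# Time a query shaped like the ones production will run.
start = time.perf_counter()
conn.execute(
    "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"
).fetchall()
print(f"Aggregate query over 1M rows took {time.perf_counter() - start:.2f}s")
```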
 
Data I/O
 
Onboarding data is great, but how many of us think of off-boarding? (No, it's not something you do in a heavy snowfall....) Under data protection and privacy legislation (in the EU and many other jurisdictions), you must facilitate the export of any data relating to a user in a reasonably readable format. If you don't know what data you hold on users, it's hard to export it. We should therefore plan from the ground up so that when methods are written to get data into the system, we have corresponding data-out functionality as well. Remember, it is far easier to build these things in at design time than to bolt them on afterwards, when the system has grown out of control and the people who originally designed it and can answer questions are long gone.
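
One way to keep data-in and data-out in lockstep is a registry: every time you add a table that holds personal data, you add a matching export query. A small sketch of the idea, with hypothetical table names:

```python
import json
import sqlite3

# Registry of per-user export queries - extend this whenever a new table
# storing personal data is added. Table names are hypothetical placeholders.
EXPORTERS = {
    "profile":  "SELECT * FROM users    WHERE id = ?",
    "orders":   "SELECT * FROM orders   WHERE user_id = ?",
    "messages": "SELECT * FROM messages WHERE user_id = ?",
}

def export_user_data(conn: sqlite3.Connection, user_id: int) -> str:
    """Return everything we hold on one user as human-readable JSON."""
    conn.row_factory = sqlite3.Row
    dump = {
        name: [dict(row) for row in conn.execute(sql, (user_id,))]
        for name, sql in EXPORTERS.items()
    }
    return json.dumps(dump, indent=2, default=str)
```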
 
Testing
 
When developers test their own work, that testing is based on what they themselves know and on their tacit knowledge of how things work. For this reason, amongst others, it is important to get non-developers to test the work. Sure, the developer should not release anything for test unless basic sanity checks have been carried out, but unless you are selling into the developer community, it is critical that you get people with no preconceived ideas to walk through designs and prototypes with you.
 
The UI, of course, is incredibly important, but so too are the data and its validation. When constructing sample data, you need to ask some questions. Is the data relevant? There is no point in testing validation for a US ZIP code format if the application is only deploying in the UK, where the format is different. Is the data date-bound? If your home page shows a dashboard summarising the last thirty days' sales figures, but your sample data was collected six months ago, you may be scratching your head as to why the dashboard widget is empty. This matters even more when you are chaining business logic together and comparing data over time against historic data dependencies. In cases like these, it is good practice to put together a sample-data-generating harness that can produce accurate sample data based off a central pivot date given as a parameter.
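
A minimal sketch of such a harness, assuming a hypothetical sales dashboard: every row's date is computed relative to the pivot, so a "last thirty days" widget always has data no matter when the tests run:

```python
import random
from datetime import date, timedelta

def generate_sales(pivot: date, days_back: int = 90,
                   per_day: int = 5) -> list[dict]:
    """Generate sample sales anchored to a pivot date rather than fixed dates."""
    rows = []
    for offset in range(days_back):
        day = pivot - timedelta(days=offset)
        rows.extend(
            {"date": day.isoformat(),
             "amount": round(random.uniform(10, 500), 2)}
            for _ in range(per_day)
        )
    return rows

# Usage: anchor everything to "today", or to any date a test wants to simulate.
sample = generate_sales(pivot=date.today())
print(f"{len(sample)} rows, newest: {sample[0]['date']}, "
      f"oldest: {sample[-1]['date']}")
```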
 
Going back to data volume for a minute, here is a simple rule of thumb: if data retrieval takes more than two seconds, that's too long for a live data-retrieval process, and you need to redesign.
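
You can bake that rule of thumb straight into your test suite, so a regression that blows the budget fails a build rather than surfacing in production. A small sketch of such a guard:

```python
import time

MAX_QUERY_SECONDS = 2.0  # the rule-of-thumb budget from above

def assert_fast_enough(fetch, *args):
    """Fail a test if a retrieval exceeds the live-query time budget."""
    start = time.perf_counter()
    result = fetch(*args)
    elapsed = time.perf_counter() - start
    assert elapsed <= MAX_QUERY_SECONDS, (
        f"Retrieval took {elapsed:.2f}s - over the {MAX_QUERY_SECONDS}s budget; "
        "consider denormalising or pre-staging this data."
    )
    return result
```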
 
Technologies
 
When deciding what technologies to use in a solution, assuming of course you have evaluated their suitability, you also need to consider the quality of the technology, its maturity, and the organisation supplying it. If the tech is open source, you can examine the internals yourself and make sure it's solid; in that case, the core development team and surrounding community are the organisation. Find out how long the organisation has been around, what its long-term plans are for the technology, what dependencies the technology has, and whether it conflicts with other technologies or constraints you have. A constraint might be technology-focused, such as "requires Twitter Bootstrap version 2.x", or might come from a target user organisation, such as "no Java applets allowed...".
 
Reinventing the wheel
 
Many times we may be inclined to reinvent the wheel just because what's out there doesn't do *exactly* what we need. If you are under time pressure (who isn't!), or working to a budget, you might be better off improving someone else's wheel instead. In a web context this might mean borrowing someone else's CSS, using it as a base, and tweaking it to meet your particular requirements. In a more programming-orientated sense (what is code?! ... it gets quite blurred at times!), it may mean noticing a pattern or a way of doing something in one technology, and leveraging that to build something else on top. There are many occasions where I have taken multiple pieces of different technologies and plugged them all together to get a particular result - all the time re-using, re-shaping, and not re-inventing. On the other hand, if there's a genuine need for a better mousetrap, then go build it, if the business/use case justifies it!
 
Summary
 
While this is only the tip of the iceberg, the message is clear: don't jump in without thinking long and hard about technology implications, user requirements, scalability of the system and its data, and so on. Draw on the wisdom of your team and the domain knowledge of your customers. Learn from your mistakes, and carry the lessons learned forward to each new project you approach.

