The State of Enterprise Software.

Top companies invest heavily in people, technology and leaders who are passionate about transforming their applications to improve the quality and responsiveness of their business.

What is the point of technology if it does not make life simpler and the delivery of services to customers smoother, faster and more reliable? How do the best companies overcome the challenges and consistently deliver smarter, faster and better solutions?

We’ve all been there! Joining a project that is facing the daunting reality of missing its budget and timescale objectives, and having to deliver some bad news on quality. Great companies hire great teams who are empowered to catch problems early and act quickly, nipping issues in the bud before they fester and become an annoyance.

The best teams can take a project from concept through to completion and fulfil the promise of delivering results quickly, under budget and beyond customer expectations. Handing over a project and witnessing rapid adoption by customers is without doubt the final seal of validation. It’s an amazing feeling for all concerned!

The challenge with many large organisations is that the relationships between stakeholders and senior leadership are complex and span geographies and departments. Also, the scale of investment required to deliver transformational change is often so huge that the governance and approvals process becomes a significant burden to progress. The need to constantly justify and seek approval is counter-productive and has a paralysing effect.

An effective agile team is one where every person pulls their weight to make life easier for their colleagues. Once a strategy and approach have been agreed, the project team should communicate freely and easily to maintain traceability of requirements and design, build and test artefacts. Losing traceability is the equivalent of losing control of the steering wheel of your car whilst driving at 70mph. Not good!

A typical project will have multiple development teams, each with several developers assigned to build key objects and components. As the number of teams grows, the risk and scale of mismatches that surface during the integration of major sub-components increases dramatically. The later we identify and attempt to resolve issues in the project lifecycle, the greater the impact on schedule, cost and quality.

Contact me if you have a requirement for an experienced technical delivery manager to help you stem the tide of challenges and issues your technology projects are facing. Project management is not simply about managing lists and RAID logs. An experienced, hands-on technical delivery manager will engender trust and set out clear objectives and responsibilities across your technology and business teams. Adopting a traceable project methodology will help your business stakeholders deliver an exceptional user experience for your customers.

Website Simplification

    Keeping it simple

Having a personal website is all the rage. Given the long list of options, where does one start? As a technologist I’m always trying out new ideas and prototyping solutions. So let’s take a look at how I arrived at the Jekyll-based solution you are currently reading.

Requirements

The requirements:

  • Must be simple!
  • Free hosting with a custom domain name
  • Simple content editor that supports images
  • Git-based release updates

Remarkably, this project did not require a business case, funding, the availability of a large group of people or any sponsorship from a senior executive. I managed to do it all by myself. Apart from this little writeup, all of the requirements, design and planning are maintained in that very efficient repository known as “my brain”.

Keeping It Simple

We all know what Simple is when we meet it. It’s a rare experience in the corporate world. But when all the decisions rest with one person, it is pretty easy to achieve.

To appreciate Simple we need to have met its close cousin, Difficult. My previous experience of implementing Content Management Systems (CMS) such as Drupal and Typo is that they verge on enterprise-grade platforms. The hosting and configuration of these open source beasts require rolled-up sleeves and a few iterations to get into shape. Achievable, but not Simple!

The Simple solution that I am using to broadcast this content is courtesy of Jekyll, a very simple and lightweight framework that makes it easy to configure and deliver content.

The Jekyll setup is well documented. The ideas are captured in a file using a simple Markdown editor such as MacDown. Once set up on your local machine, the next big step is hosting!
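If you want to try it yourself, getting a skeleton site running locally only takes a handful of commands. This is a minimal sketch assuming a working Ruby installation; the site name is just a placeholder.

    # Install Jekyll and Bundler (assumes Ruby and RubyGems are already installed)
    gem install bundler jekyll

    # Create a skeleton site and preview it locally
    jekyll new my-site          # "my-site" is a placeholder name
    cd my-site
    bundle install
    bundle exec jekyll serve    # serves the site locally, typically at http://localhost:4000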

Free Hosting

The simplest way to host a Jekyll website is to create a free GitHub account and upload the full website as a Git repository. With a little tweaking, this will magically dispatch your content to users as a website via a dedicated URL, e.g. yourwebsite.github.io. GitHub Pages supports the Jekyll framework natively and there are simple instructions available for serving your web content to users.

The hosting option I have used is based on the excellent and free PaaS offered by Red Hat OpenShift. Once the Jekyll gear is configured, the hosted solution is accessible via a dedicated URL, which can be aliased to a custom domain by setting up a DNS entry with a CNAME mapping. A couple of mouse clicks, follow the simple instructions, and a robust hosted website will be up and running.
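The DNS side amounts to a single record. A hypothetical example with placeholder names is shown below; the exact target hostname comes from your hosting provider.

    ; map the custom domain to the hosted application (placeholder names)
    www.example.com.    IN    CNAME    yourapp-yourdomain.rhcloud.com.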

Content Editor

Let’s face it, the whole raison d’être for having a website is to publish content. Content for a Jekyll website is written in Markdown syntax, and there are so many great editors available that will make writing posts fun. There is a great review of Markdown and the available editors. In simple terms, each post is contained within a single file. The filename of this post is 2017-03-29-website-simplification.md. The naming convention is automatically used to sort your posts in date order. I use MacDown to write up my posts and store the files in the _posts folder.
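For illustration, a post file typically starts with a small block of YAML front matter followed by the Markdown body. The layout and title below are placeholders; the exact front matter you need depends on your chosen theme.

    ---
    layout: post
    title: "Website Simplification"
    date: 2017-03-29
    ---

    Having a personal website is all the rage...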

The Markdown syntax allows images to be easily incorporated into posts. A good way to manage images is to keep them in a separate folder from your post files. The directory structure might look like this:

_posts folder:
    2017-03-29-website-simplification.md
    ...

images folder:
    2017-03-29-website-simplification-01.png
    2017-03-29-website-simplification-02.jpg
    ...

You get the idea. This makes it easy to pick out the images associated with any post.
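Referencing an image from a post is then a one-liner in Markdown. The path below assumes the images folder sits at the root of the site.

    ![Rear view after completion](/images/2017-03-29-website-simplification-02.jpg)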

Git-Based Release

Git is perhaps a little too technical for casual users who do not have a developer background, but it’s a relatively simple yet extremely powerful tool for managing changes to documents. There are plenty of explanations available if you want to find out more. Command line versions of Git are available which allow you to manage the version history of your files, and they are a great fallback option if you are stuck.

The free version of GitKraken makes life much easier by providing a secure, GUI-based front end to Git. Tracking changes and updates to your website becomes simple and fun.
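Whether you use GitKraken or the command line, publishing an update boils down to the same three steps. A minimal command line sketch, assuming the remote is already configured and your hosting platform builds from the master branch:

    # stage the new post and images, record the change, and publish it
    git add _posts/2017-03-29-website-simplification.md images/
    git commit -m "Add website simplification post"
    git push origin master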

The full flow of content, from idea to served web page, is orchestrated with a good mix of tools that simplify the process and, best of all, it costs nothing!

Summary

You may be thinking, why bother with all this? Why not just post the content on a blogger site? Well, if you are the sort of person that yearns for a microwave meal, then that is certainly an option. This solution is not quite a meal at the Savoy, but it’s a pretty decent spag bol that you have built with your own labour. Enjoy!!

Property Development

Building a house is a lot harder than writing software?!

A home is without doubt one of those important things that affects our lives. We will all own or rent one at some point in our lives. Housing is everywhere. Very much like software, it’s an integral part of what we do. I created an ArchiMate model of the business functions and services that must be managed during a major property build project.

Our homes are also a major investment, tying up huge amounts of capital and wealth. So it’s not surprising that TV programmes on all aspects of property are so popular.

Buying a holiday home or renovating a second property to rent out presents an enticing investment idea. A 60-minute episode of Grand Designs will get us excited and reaching for the power tools. Knock down a wall, build an extension and refresh the kitchen! Property development looks easy, and much more enjoyable than the 9-to-5 in the office?

Well, that’s kind of true! The reality is never that simple. The horror stories surface very quickly and the pitfalls are endless. Enthusiasm and money will only take you so far. Take a look at the business model above and you will observe the multitude of roles and interactions that must be managed to the tightest of schedules, with the payroll funded from your savings!

My personal experience was an ambitious one, out of necessity. No doubt about it, the end result was amazing; however, it was very painful and traumatic as the complexity, expense and people challenges of the renovation unfolded and tested me well beyond my limits. The unpredictable and frequent issues that surfaced will test you physically, mentally and emotionally. The compounding pressure of time, budget and quality is uncompromising. Whether you play a small management and financing role or take a more hands-on involvement, the pressures will be severe.

For me, the timing of this renovation came shortly after a very challenging consultancy engagement had completed, and I was looking forward to some relaxation with a property renovation project for our new home. Due to personal circumstances I set myself an unbelievably difficult challenge: to complete the entirety of this personal project in 2 months from start to finish.

Rear shots taken when we viewed the house.

Rear view after completion.

We had listed our current home for sale, and the sale completed within days of our offer on the new house (see above) being accepted. Our buyers had insisted on an aggressive entry date, which meant that we had 2 months. Two months to complete the purchase of the new house, submit designs and plans for council planning and building warrant approval, engage contractors, demolish, remove and rebuild. The property had to be habitable for a young family and our furniture in 2 months! I simply could not afford the more relaxed and expensive option of renting for 6 months and completing the work in a sane timeframe. Cost!!

The business model depicted in the diagram above came to life with a vengeance and I was the fodder fueling the monster.

During this time I made some new friends – me and my orange cement mixer!

With the benefit of hindsight, I would never have started such a project. However, being enthusiastic and reasonably pragmatic, I rolled up my sleeves and jumped in. How wrong I was! This was tough. Not a sprint or a marathon. This was an ultra marathon with shackles tied to your ankles. This was the kind of character building that you wish on others.

I worked with some great contractors and some not so great. With perseverance and blood, sweat and tears – the results were amazing. We moved in after 2 months into two working rooms and spent a final month finishing off a significant list of work items. Have a look at the before and after pictures above. Would I do it again? Yes, but I would not put myself under such time pressure and the very real risk of ending up homeless if the project did not complete on time.

For me at least, IT projects and programmes are equally complex, expensive and face huge time pressures. Success in these projects requires that all-important ingredient: putting skin in the game! It is vitally important to work with the very best people, who care about the end result and will help you navigate that pitfall-ridden journey.

Content is King!

Governments across the globe are keen to get citizens to use transactional digital services. Making Digital Government Work for Everyone is a clear and simple strategy that can deliver huge benefits to citizens and governments. It’s not fun when you get it wrong though!

The GDS GOV.UK Verify service has been getting some heat recently since the announcement of the public beta. Some farmers reacted angrily at being shut out of the new online service for Common Agricultural Policy (CAP) applications and payments because they were unable to verify their identity using the government identity assurance scheme. The opposition Labour Party weighed in, calling on the government to urgently address the issues with “Verify”.

Identity assurance is a core digital building block that enables public digital services. However, it is a complex and emotional minefield that must be managed carefully to address public concerns about privacy. I was fortunate enough to have implemented the myaccount service in Scotland, which was adopted by the City of Edinburgh Council. What is more, the service transformation was implemented in 6 months once the OJEU procurement process was completed.

The Scottish Government myaccount service transformed and modernised an outdated system whose golden data sources are owned and managed by the 32 Scottish local authorities. All councils know their citizen details because they must provide services to each and every household and individual throughout their lives whilst they remain within their jurisdiction.

The GDS strategy for identity assurance adopted a different approach where commercial identity providers were contracted to verify and assign trust to the identity of an individual. The accuracy of any identity match will depend on the quality of data and the information that a citizen enters during registration. The easier we make it to narrow down the name and address of a citizen, the simpler the process. Establishing trust is another matter!

The myaccount service establishes trust by linking a citizen’s name (as held in health records) and address (as held by the OSG), and then establishing a known fact or secret that only an authentic user can provide, which must match a trusted golden data source of transaction-related data held by a local authority. For example, a citizen may need to provide the myaccount service with a council tax reference number that can be linked to their name and registered address to assign an online, real-time level of assurance. The service has an impressive accuracy for getting a match at the first attempt. Where exceptions occur, a manual overnight batch process kicks in to investigate multiple matches and create a myaccount.
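To make the shape of that check concrete, here is a small illustrative sketch in Python. It is not the actual myaccount implementation; the check names and assurance levels are invented for the example.

    # Illustrative sketch only, not the real myaccount service logic.
    def assurance_level(name_matches_health_record: bool,
                        address_matches_gazetteer: bool,
                        known_fact_matches_council_record: bool) -> str:
        """Combine three independent checks into an illustrative level of assurance."""
        if (name_matches_health_record and address_matches_gazetteer
                and known_fact_matches_council_record):
            return "verified"        # all checks passed: online, real-time assurance
        if name_matches_health_record and address_matches_gazetteer:
            return "manual-review"   # falls through to the overnight exception process
        return "unverified"

    # Example: a citizen supplies a council tax reference that links to their
    # registered name and address, so the account can be verified in real time.
    print(assurance_level(True, True, True))   # -> verified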

Technical details aside, the most critical measure by which any public service will be judged, especially an identity assurance service like Verify or myaccount, is the spectrum of services that a citizen can access once they have registered for an online identity and provided the necessary proofs. The more services on offer, the less the focus will be on the few users who could not be registered or make use of the service.

Digital services are driven by the dream of delivering faster, better and smarter. Unfortunately, most public sector organisations and government departments that run such projects tend to work at a very slow pace. Every digital project that is live within government adopts agile and scrum, and whiteboards are plentiful. However, the timescales for delivery have horizons of 5 years. Agile!!?

My own experience is that the public sector is managed and driven by people who have good intent. Sadly, the public sector and government in general have a very poor track record of managing innovative IT initiatives. Keeping it simple and following a truly agile approach to delivery can lead to results faster, but the strategy has to be driven by content first: Health, Education, Council Services, etc. How can the citizen access new services easily, efficiently and with reduced delays?

The GDS team are making great inroads and will succeed as the service uptake gathers pace. Innovation and change takes time to bed down. Good things will eventually follow. I’m an optimist!

Trust, Confidence & Security.

We trust people we recognise. Is it over-rated? The online world is complex and it is often very difficult to trust the authenticity of the websites that we visit, or indeed of the people who visit a site. The myaccount sign-in service was developed in response to the Scottish Government digital strategy and has been successfully adopted by several key Scottish public sector organisations, including The City of Edinburgh Council.

The myaccount service is a truly unique and innovative shared security and trust platform, designed to run as a single instance on a cloud based infrastructure to support ALL public sector services in Scotland, and it is freely available to anyone and everyone. The service is designed to comply with the Scottish Government’s Identity Management and Privacy Principles. The project was initiated and delivered in record time and achieved its business case of delivering a managed service through a world class partner, using open standards platforms and a mix of local and international partners.

Please contact me if you have a requirement for a consistent and trusted digital service, where citizens or customers can be sure that the services they are accessing are bona fide websites and not the latest phishing scam designed to entice innocent people into sharing confidential details or financial information with unscrupulous cyber criminals.

Think You Know What You Want? Think Again!

Ever wondered why so many projects overrun the schedule, under-deliver on the expectations of the business and cost much more than was originally budgeted? The list of explanations for such disasters is a subject for another post.

The complexity of a project and its technical challenges should not be underestimated. However, a large proportion of the contributory factors are very tightly linked to a lack of discipline and rigour in the management and traceability of requirements. There is an important discipline that many quality delivery organisations follow, which we can call the Application Delivery Methodology (ADM).

Every major corporation has a project management methodology and lifecycle. Unfortunately, very few people understand it and more importantly very rarely does the method assist in simplifying or improving the quality of project deliverables.

The purpose of our ADM is to standardise on a common set of artefacts for all business critical projects across the enterprise. In this article I investigate the importance of the tools required to support and manage the analysis and design phase of projects by managing the creation, maintenance and traceability of the ADM project artefacts and their inter-dependencies.

Traceability

The ADM has been created primarily to follow the waterfall design approach and, more importantly, the logical process model for the analysis and design phase of a project, as depicted in the diagram below. It is important to note that the definition and alignment of test-related artefacts is considered critical and within the scope of the analysis and design phase of the project lifecycle.

[Diagram: ADM artefacts for the analysis and design phase and the traceability linkages between them]

The diagram above identifies the core project artefacts that will be produced to drive the enterprise towards the creation and delivery of a high quality software solution. In general, the IT department will adopt the role of overall design authority for the technical solution and will work with in-house resources, external suppliers and third parties to deliver a robust, scalable, extensible and integrated quality solution on a fixed price basis. The approach is equally applicable to bespoke build, COTS integration or change enhancement to an existing system. Furthermore, the rigour of the proposed approach is equally applicable to application development and infrastructure engagements.

The diagram above identifies the standard set of artefacts (see boxes) and the linkage between these products (see lines). The left-hand column depicts the requirements specification artefacts (e.g. BRS, FRS and DDS) that will be produced by (i) the business analyst team, and (ii) the design team. The right-hand column identifies the associated validation criteria that are aligned with the requirements specification artefacts by the test team.

At a logical level, the diagram above depicts the inter-dependencies between artefacts. It should be immediately clear from this diagram why IT projects are so inherently complex to manage: the inter-dependencies, and hence the impacts of decisions within any given artefact, can have a profound effect both vertically and horizontally. Given that each artefact is targeted to capture the requirements and acceptance criteria of a unique set of users, it is clear that any misinterpretation or gaps in the artefact will lead to a mismatch in expectations. Experience suggests that where this mismatch is significant, the project will identify defects and incur costs to realign the solution to the original business requirements.

ADM and Traceability

The ADM defines the standard templates for artefacts and more importantly the rules for capturing requirements and specifications in a pragmatic, unambiguous and rigorous manner which is readily understandable by the target audience and easy to interpret for test and verification purposes. However, it is absolutely key that the selected tool provides a simple way for managing traceability and linkage across artefacts and even within artefacts themselves.

Why is this important? The purpose of any IT engagement is to deliver a costed solution to the business requirements and achieve a defect-free sign-off against the User Acceptance Criteria. However, in order to achieve this, the analysts, designers and testers must work collaboratively at increasingly finer levels of detail to decompose the problem statement into a functional/logical definition of the solution and ultimately into a detailed/physical representation that can be implemented.

The traceability and linkage across (i) the conceptual business view, (ii) the functional logical view and (iii) the detailed physical view is very complex to manage and hence appropriate tooling and governance must be put in place to make this a practical proposition.

For example, at a high level we require (i) vertical traceability from the BRS to/from the FRS, and similarly (ii) horizontal traceability from the BRS to/from the User Acceptance Testing (UAT).

The complexity of the analysis and design phase and the approach to decomposition is depicted in the diagram below which highlights the extent of documentation and linkage that will occur within even the simplest of projects.

[Diagram: decomposition and linkage of artefacts across the analysis, design and test domains]

This diagram simply shows another view of the relationships and dependencies described earlier. For example, the horizontal relationship between the BRS and UAT (BRS – UAT) is depicted at the top of the diagram: from the BRS, one is able to derive a number of User Acceptance Tests, and each user acceptance test can be linked back to a requirement in the BRS. The diagram also shows the vertical linkage between the types of testing: UAT, System Test, Integration Test and Unit Test. The linkages highlighted in red show intra-domain linkages, where there are horizontal inter-dependencies across FRS n and FRS n+2. Similar relationships and hierarchies will exist in the testing domain.

Many organisations, when faced with the prospect of managing this number of artefacts, will raise the white flag of surrender and decide to pursue a RAD-based iterative approach. This approach is commonly unacceptable within most corporations, and experience has taught us that it is a false economy: it abdicates the responsibility for this analysis to the developers, whose decisions and interpretations are made in isolation during code construction and under extreme time pressure to deliver software to a tight schedule. This will almost always result in increased cost and delays to the final solution delivery. As the complex network of dependencies between analysis and design requirements and test is non-trivial, any attempt at performing this task manually becomes unmanageable without the use of appropriate tooling. The end result of such an approach is inevitably increased risk and exposure to stringent rules and regulations.

Whilst the ADM may appear to force a waterfall approach to application delivery, it is in fact highly adaptable to an iterative and agile delivery model. As the methodology is rolled out and experience is gained in scoping and decomposing business requirements into the identified project artefacts, projects can be delivered iteratively and in a more agile manner as a number of thin, inter-related tuples (BRS, FRS, DDS) that each form the basis of an independent work stream.

Summary Views

The selected tool must be able to easily generate a matrix view of requirements such that a detailed conceptual, logical and physical view of requirements can be traced horizontally and vertically, allowing a collaborative team comprising business users, business analysts, project managers, designers, developers and testers to get a consistent view of the inter-dependencies of requirements across the project. This is a very powerful feature that provides a framework for decomposing multiple views that span organisational boundaries.

The generation of summary views must be derived and maintained from a consistent master repository which is professionally managed for configuration and change control as a group wide asset in much the same way as any productionised business application.

Business Requirements Traceability

[Table: business requirements traceability matrix]

Any tool must be easily configurable to apply appropriate business rules and a hierarchical labelling structure to all information contained within the repository. Specifically, the generation of unique identifiers for Business Requirements is absolutely key, as these will be the primary keys upon which all downstream requirements are cross-referenced. From the table above, we can see how a physical representation of the desired traceability can actually be displayed. This model allows for linkage of the BRS to inter-related BRS identifiers and to the derived FRS and DDS. Furthermore, the linkage to testing is captured at the UAT level.
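To make that concrete, a single row of such a traceability matrix could be represented as structured data along the following lines. This is purely illustrative; the identifier scheme (BRS-001, FRS-001.1, …) is an invented convention, not a prescribed standard.

    # Illustrative only: one row of a business requirements traceability matrix.
    brs_row = {
        "id": "BRS-001",                      # unique identifier, the primary key
        "title": "Customer can reset their password online",
        "related_brs": ["BRS-014"],           # intra-domain linkage to other BRS items
        "derived_frs": ["FRS-001.1", "FRS-001.2"],
        "derived_dds": ["DDS-001.1"],
        "uat": ["UAT-001"],                   # horizontal linkage to acceptance tests
        "status": "signed-off",
    }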

It is highly likely that our standard tool for capturing and managing test requirements will be based on a product such as the Mercury TestDirector suite. Hence, any tool that we identify must be able to support bilateral integration with it.

We would expect the tool to provide this view as an active navigational aid to browse to the desired level of detail and with appropriate drill down as required.

Functional Requirements Traceability

The FRS view is identical to the BRS view; however, the context is from the perspective of the designer, and hence the FRS is the starting point, along with its associated System Tests.

Detailed Design Traceability

The DDS view is identical to the BRS and FRS views; however, the context is from the perspective of the designer/developer, and hence the DDS is the starting point, along with its associated Integration Tests and Unit Tests.

ADM Tooling

The following UML diagram is a meta model of the information and concepts described above. As the meta model shows, there are many 1-to-many relationships between the various artefacts and we will be looking for tools that seamlessly manage these complex associations between the many project artefacts.
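As a rough sketch of those relationships (the UML diagram itself is not reproduced here), the one-to-many associations between artefacts could be expressed as simple types along these lines; the class and field names are illustrative only.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch of the ADM meta model: each artefact owns a list of the
    # artefacts derived from it, giving the one-to-many relationships described above.

    @dataclass
    class TestCase:
        id: str
        level: str                 # "UAT", "System", "Integration" or "Unit"

    @dataclass
    class DetailedDesign:          # DDS item
        id: str
        tests: List[TestCase] = field(default_factory=list)

    @dataclass
    class FunctionalRequirement:   # FRS item
        id: str
        designs: List[DetailedDesign] = field(default_factory=list)
        tests: List[TestCase] = field(default_factory=list)

    @dataclass
    class BusinessRequirement:     # BRS item
        id: str
        functionals: List[FunctionalRequirement] = field(default_factory=list)
        acceptance_tests: List[TestCase] = field(default_factory=list)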

Until we have selected a tool, we will assume that:

  • All requirements will be captured in Word and imported into the tool
  • All requirements will be uniquely identified and referenceable through an auto-generated tag or a “link”
  • All requirements will be linked to an associated requirement and the linkage can be easily navigated up or down
  • Using the method, the following documents will be generated and be fully traceable:
      • Business Requirements Specifications
      • Functional Requirements Specifications
      • Detailed Design Specifications
      • User Acceptance Tests
      • System Tests
      • Unit Tests

Traceability (vertical and horizontal) amongst these deliverables will be mandatory. The tool must provide an easy and robust mechanism for managing the associations and the tool must:

  • Provide impact analysis reports, e.g. to be able to quickly determine the impact of a requirement change
  • Warn about and manage the deletion of artefacts. Obviously, removing an artefact with dependent artefacts needs to be managed and handled correctly
  • Provide different views on the data, e.g. a tree view where, starting from a requirement, one can “drill” down into the associated artefacts such as analysis and tests (UAT, system test, unit tests), and
  • Provide coverage reports, i.e. confirm that all expected artefacts for a requirement have been produced and report their status (e.g. complete, signed off, work in progress, …). The tool must highlight outstanding artefacts. A minimal sketch of these impact and coverage checks is shown below.
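To illustrate the kind of behaviour expected of the tool, here is a minimal sketch of an impact analysis and a coverage check over a simple table of traceability links. The data layout and identifiers are assumptions for the example, not features of any particular product.

    # Illustrative sketch: traceability links stored as (from_id, to_id) pairs.
    links = [
        ("BRS-001", "FRS-001.1"), ("BRS-001", "UAT-001"),
        ("FRS-001.1", "DDS-001.1"), ("DDS-001.1", "UT-001"),
    ]

    def impacted_by(artefact_id, links):
        """Impact analysis: return every downstream artefact reachable from the given one."""
        impacted, frontier = set(), {artefact_id}
        while frontier:
            nxt = {b for a, b in links if a in frontier} - impacted
            impacted |= nxt
            frontier = nxt
        return impacted

    def uncovered(brs_ids, links):
        """Coverage check: BRS items with no linked acceptance test."""
        return [b for b in brs_ids
                if not any(a == b and t.startswith("UAT") for a, t in links)]

    print(impacted_by("BRS-001", links))             # impact of changing a requirement
    print(uncovered(["BRS-001", "BRS-002"], links))  # -> ["BRS-002"]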

Summary

The successful rollout of the ADM requires appropriate tooling to support the easy and intuitive approach that we have defined. Specifically, the use of standard templates and the management, navigation and traceability of those artefacts across collaborating teams and organisations is considered to be very important. Whilst the identified tool must have a rich set of features, it is critical that it easily and flexibly handles traceability as described in this document and provides natural integration with the Mercury Test suite of applications.

Traceability ensures completeness: that all lower-level requirements derive from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements. Traceability is also used to manage change and provides the basis for test planning.