Trends & Transformation
Cut the crap! The unintended benefits of Lean
16/04/2021
Transformation #LeanLeadership, #leanthinking

I stumbled across this presentation by Dave Hahn, a senior SRE at Netflix. Dave provides an entertaining keynote speech on how DevOps works at Netflix. I highly recommend watching it; it’s about 20 minutes long. For those of you short on time, I’m going to touch on some of my key takeaways in the post below.

Spoiler alert! This post is not about DevOps. What this is about is the application of Lean principles.

Throughout the presentation, Dave outlines what they ‘don’t do‘ at Netflix and what they ‘do focus on‘. It just so happens that this has resulted in the behaviours and benefits outlined in DevOps. What is striking to me is that this talk provides great examples of the actual application of Lean principles within management decision making and touches on some of the intended and unintended benefits. These include levels of innovation, focus on the customer, and identifying what differentiates them as a service. So how does this all work?

Lean thinking at Netflix

Firstly, it is apparent that Netflix has a clear mission statement: ‘to win those moments of truth‘. This is their north star, from which they validate how they operate, hire and promote the desired management behaviours, as well as determine how and where they should invest their resources. It would appear that they apply a laser-like focus to ensure that all their resources are directed towards achieving this.

An example of this is building a purpose-built CDN (content delivery network) platform. Netflix used to buy this as a service but realised that it was an expensive overhead, and that the vendor’s value drivers and overarching strategy were not aligned with theirs. Hence, they built their own. This enabled them to offer ISPs free memory caches, deployed within the ISP network, reducing the ISPs’ overhead on transit costs but, more importantly, ensuring that ‘more Netflix is nearer to the point of consumption’ and that you and I have a greater customer experience.

Netflix are a data-driven company – data is at the core of their decision-making process, from identifying what shows to produce and selecting the ideal director to realise the intended vision, to which actors are most likely to capture the audience’s attention. Though data is one of their core assets, they have realised that managing everything within the data ecosystem isn’t a valuable use of their resources. Hence, all of their hundreds of microservices have been on the cloud since 2016. This is a clear example of identifying elements of the value chain – data centres, in this instance – which do not provide a clear, relatable benefit to the customer but are a necessity, something Netflix classes as ‘undifferentiated heavy lifting’, and, upon identifying it, outsourcing it.

Their first priority is keeping engineers doing fun, exciting, challenging stuff so that they maintain, if not increase, their ‘velocity of innovation‘. This means that they actively look to remove bureaucracy, as employing someone to dream up all the relevant guardrails whilst trying to promote a fast-paced, Agile organisation is an oxymoron. Instead, they empower employees to both make and take responsibility for their decisions – for example, all their engineers have access to the production environment. For anybody who is not technical, this is like an airline allowing all technical staff to make changes to the engines whilst cruising at 40,000 feet.

To ensure that our jet doesn’t stall whilst mid-air, Netflix has established a set of tools which automate testing and quality checks prior to pushing new features into production. This results in you and me enjoying the latest features to select, preview and help us decide what programme to watch next. By building these support tools to empower decision making, they’ve removed the bureaucracy associated with release windows and the overhead of training engineers on new programming languages and toolsets, whilst safeguarding against potential errors or bugs – all of which would be time consuming and would ultimately slow the rate of innovation.

Observations

This, I assume, has evolved over time and reminds me of the adaptive organisation approach outlined in Eric Ries’s book, The Lean Startup. This involves applying the 5 Whys technique when an issue or fault has occurred, to identify the root cause, and then, dependent on the cost associated with the fault, proportionally investing in a fix. In Netflix’s case, this has resulted in an array of open source tools which make it easier for employees to launch their latest ideas, whilst mitigating the risk of bringing down the whole platform.

This does not mean that Netflix is a nirvana to work at. By their own admission they spend a considerable amount of time ensuring ‘cultural fit’ of applicants when hiring. Hence, you don’t just have to be one of the best or the brightest in your field, you must also be aligned with Netflix’s core values.

Yes, their service has experienced downtime, as is well documented on Twitter, but this has not been caused by disgruntled employees and, due to tools like Chaos Monkey, when it does occur, they react quickly.

This calculated risk is one that their customers are more than willing to overlook. I would say ‘forgive’, but this implies that the customer consciously recognises that they have been wronged and hence has to emotionally engage in a process of forgiveness. As such, I would argue that the quality of the service, and the content that they now produce themselves, meets their customers’ primary needs and desires, affording them the ability to implicitly circumvent this emotion – no easy feat!

So what’s the takeaway?

A clear vision and mission statement, shackled to a management culture which is obsessive about the customer experience, appears to be a consistent theme across the FAANG behemoths. Whilst drafting this post, I stumbled across the blog of Gergely Orosz, an ex-Uber employee. He provides similar insights in his article ‘What Silicon Valley “Gets” about Software Engineers that Traditional Companies Do Not‘. This article highlights a number of parallels: empowering intelligent people, removing bureaucracy, and encouraging transparency of both projects and the business’s performance.

I haven’t worked at a FAANG organisation, so like you, I am decoding what is shared in the public domain. And like all companies and anyone with a social media account, they will be guilty of promoting their idealised selves.

All organisations have failings, and some of their priorities and values will not transcend every industry. For example, a healthcare service provider which prioritised velocity of innovation over everything else would be incongruent with the Hippocratic oath. I’m sure that this hypothetical organisation would produce some interesting but clearly ethically questionable results.

I have now worked in a number of organisations, all of which have had different cultures and have been hell-bent on finding the elusive ‘secret sauce’ to continually increase market share, revenue and growth, and/or improve margin. Unsurprisingly, there is a tendency to attempt to mimic core capabilities of a market leader in a separate sector.

A recent example, within the Project Management community, is job postings requesting that candidates have experience of Agile at scale akin to the Spotify Model. This bemuses me because, firstly, there is likely to be a small talent pool which actually has this experience and, secondly, it’s quite unlikely that those who do will be willing to work for an organisation which is trying to mimic this capability – especially if it does not already demonstrate the necessary cultural foundations to make this a success.

That is not to say that we cannot learn from our peers in other domains. But when doing so, we need to be aware of what trade-offs they have made in order to achieve success, along with how their value streams are configured overall. Subsequently, we need to be aware of the root cause of a desired capability – in this case, a management culture which, either by design or chance, champions Lean principles and just so happens to result in a great DevOps capability.

Final thoughts…

Writing this has got me thinking about what I currently do on a regular basis and how I can apply Lean thinking to my daily practice.

I am guilty, maybe you are too, of having a wide and varied array of interests, which can cause me to become distracted, or to expend effort on ‘undifferentiated heavy lifting’ when trying to solve problems. I actually feel that in a world where there is an abundance of tools delivered via the cloud, enabling entrepreneurs and tinkerers to explore an abundance of possibilities, as alluded to by Chris Anderson in The Long Tail, everything that can be conceived will be.

Subsequently, in a world full of social media updates, emails, instant messages and an abundance of tools, capabilities and new concepts, the real skill, or ‘secret sauce’, is balancing an openness to new ideas and education whilst not allowing yourself to become distracted from your objective. This requires you to periodically review your overarching goals and, where possible, remove or outsource waste. Personally, if I manage to do this continuously, I believe that I’ll achieve my intended benefits but, as Netflix has found, I may also stumble across some unintended ones as well.

3 Laws
28/03/2021
Trends Digital Transformation, First principles, Kryder's Law, Moore's Law, Nielsen's Law, Trends

As spring 2021 has kicked in, it got me thinking about how underlying beliefs and assumptions shape our thinking, and how even these need to be reviewed every once in a while. Hence, I thought I would undergo some ‘mental’ spring cleaning. I remember that whilst at university, sadly well over 10 years ago now, I learnt about the 3 laws: Moore’s Law, Nielsen’s Law and Kryder’s Law.

All three have been key to the underlying architecture which has spurred on the march of progression in the digital age and have definitely informed my thinking when considering potential digital products and services.

Rather than just revisit these and with awe, recite the orders of magnitude that have been achieved, I decided to apply first principle thinking. I did this in order to understand the limitations of where we are going, discover any blind spots in my own thinking and hopefully provide some inspiration of what to expect over the next five to ten years.

In doing so, I’ve shared my observations and thoughts below, which I hope will, at a minimum, provide some food for thought…

First Principles

Before we touch on the 3 laws, let’s recap what first-principles thinking is. A first principle is:

a basic proposition or assumption that cannot be deduced from any other proposition or assumption

https://en.wikipedia.org/wiki/First_principle

This approach originated from the Aristotelian school of philosophy and is essentially the application of a set of questions in order to determine the root cause. A modern-day Lean management approach to this, which you can apply quickly, is the 5 Whys.

Although they are referred to as laws, it is only the laws of physics which are unbreakable. As humans, we have a weakness for conflating patterns with laws, and subsequently we can get a little ahead of ourselves when making predictions. By applying this approach, I am going to try to mitigate, as much as possible, any fallacies in my own thinking.

Now that I’ve cleared that up, let’s focus on the 3 laws, starting with ‘the daddy’ of them all – at least in the tech world – Moore’s Law.

Moore’s Law

It was first identified by Gordon Moore, the co-founder of Intel, around 1970. He observed that the number of transistors on an affordable CPU doubled roughly every two years and predicted that this trend would continue. This doubling leads to exponential growth in processing power, meaning that if you spent $1 today, in roughly two years’ time that same $1 would buy you a CPU with twice as many transistors – resulting in statements that the chip is twice as fast.

This doubling explains why cost barriers in the 1960s resulted in computers only being owned by large multinational organisations, governments or universities, and why today computer chips are embedded in, or offered within, devices as part of the Internet of Everything: smartphones, smart watches, cars, speakers, lights etc.

What are the limitations?

However, applying first-principles thinking demonstrates that this trend is not going to last forever. The primary bottleneck is that we are unable to make transistors smaller than an atom. Presently, the smallest chip process is 7 nanometres (nm), with plans to build 5 and 3nm chips in the next couple of years – 5nm is equivalent to a single haemoglobin molecule, with atoms, dependent on their size, being a fraction of 1nm.

As outlined in this article in MIT’s Technology Review, Jim Keller, Intel’s head of engineering, is bullish, arguing that there are more than 100 variables which could keep the Law alive, from 3D architectures to new transistor designs, or software developers being more efficient with the hardware that they are given.

The article goes on to elaborate how a research team led by Neil Thompson, an economist based at MIT’s AI and computer centre, were able to improve the computation time of a programme written in Python, which takes seven hours to execute, down to 0.41 seconds. This was achieved by switching to C and configuring the software to fully utilise the 18 cores on the CPU.

Keeping with the theme of first principles, you may be asking why C outperforms Python. In layman’s terms, it’s down to how the computer interprets the code, turns it from text to binary and performs the necessary calculations to return a result. Python is dynamically typed: it’s easier for a programmer to write but is more computationally resource intensive. C is harder to write but is easier, computationally, to interpret. I concede, from a purist standpoint, that optimising the software on a CPU which hasn’t doubled in capacity would provide benefits, though it wouldn’t strictly be in keeping with Moore’s original statement.
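You can feel this overhead without leaving Python. A rough, hedged illustration (not the MIT team’s code): compare a pure-Python loop with the C-implemented built-in `sum()` doing the same work.

```python
import timeit

# The same sum computed by a pure-Python loop and by the C-implemented
# built-in sum(). The built-in avoids per-iteration bytecode dispatch and
# dynamic type checks - the same class of overhead described above.
data = list(range(100_000))

def py_loop():
    total = 0
    for x in data:
        total += x
    return total

loop_time = timeit.timeit(py_loop, number=20)
builtin_time = timeit.timeit(lambda: sum(data), number=20)
print(f"pure-Python loop: {loop_time:.3f}s  built-in sum: {builtin_time:.3f}s")
```

Both produce the same answer; the C-backed version is typically several times faster, and the gap only widens with more work per element.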

Subsequently, Thompson warns that this also signals that the benefits of Moore’s Law may no longer be enjoyed at a general level. Citing the explosion of interest around Deep Learning and AI applications, which require GPUs due to their greater ability to handle parallel operations, as well as the ASIC chips used for cryptocurrency mining, he argues that there will be a greater focus on specialising software and chip architectures towards solving specific business problems – wherever the most money resides.

This hardware conundrum has led researchers in the AI field to consider using analogue ‘black box’ solutions or a hybrid approach as a way of overcoming this hurdle.

Is it the end of the line?

In short, we may not yet be at the end of Moore’s Law, but taking into account that there are fewer manufacturers of high-end chips, and that those producers are experiencing delays in industrialising new production methods, it would appear that we are approaching the maturity of the existing silicon-based technology S curve.

Interestingly, Ray Kurzweil identified Moore’s Law as the fifth paradigm of computation to have provided exponential growth since 1890. Kurzweil tracked the progress in performance across mechanical calculating devices, Alan Turing’s relay-based ‘Robinson’ machines, vacuum tubes, transistor-based machines and the integrated circuits which have made Intel and Gordon Moore famous. The essay was written in 2001 and goes on to predict that Moore’s Law will come to an end in 2019.

So, if Moore’s Law is part of a bigger trend, what are the candidates to continue this trend and provide a sixth paradigm?

Over the last couple of years there has been growing noise around developments in quantum computing. For all the hype, though, this is still in an R&D phase and, even with a significant breakthrough, is unlikely to be available for general adoption anytime soon, as it will require new approaches to writing software and algorithms in order to exploit the benefits of quantum entanglement.

Graphene, the wonder material that was first isolated in 2004, has made progress in the last seventeen years, featuring in sports clothing and equipment, in barrier coatings used on the hulls of ships and, in the world of tech, in the Huawei X10 phone as part of the cooling system. However, production at a large enough scale to compete with silicon is still a way off, and existing methods suffer quality issues, even between batches of the same product from the same company.

So rather than revolution should we be looking for evolution?

If so, the front runner to complement silicon, and over time potentially replace it, is compound semiconductors. These can be made out of up to four other elements and have the potential to be 100 times faster than silicon, operate at a lower voltage and be resistant to heat. What sways my opinion is that they are already in use in 5G-enabled phones and domestic lasers (CD, DVD and Blu-ray players) – and if you have a wireless mouse, it too is more than likely to house one.

The reality is that silicon is unlikely to disappear anytime soon. Hence, I would argue that we are in a transitionary state between two technology S curves – silicon and compound semiconductors – and will know by 2025 if compound semiconductors are the new heir apparent.

Nielsen’s Law

Similar to Moore’s Law, Nielsen’s Law is concerned with bandwidth, stating that a high-end user’s connection speed grows 50% annually – a trend which can be demonstrated over the last 36 years. The key phrase here is ‘high-end users’, whereas Moore’s Law has applied to the masses. The reason this growth in bandwidth doesn’t automatically benefit the average user is that if you buy a computer twice as fast, your software runs twice as fast, whereas if you get a modem twice as big, web pages do not load twice as fast.

The speed of the internet is a function of the individual user’s connectivity and of the infrastructure and bandwidth of the content provider. Hence, why pay a premium if it doesn’t provide a discernible benefit until 2–3 years later, when the mainstream catches up?

Essentially, this is a ‘build it and they will come’ problem. Once you provide the infrastructure, developers will design services that maximise the capacity, which content providers will then utilise and mainstream users will want to consume. A couple of years ago I was thankful to be able to stream SD video, with a high-speed connection and UHD monitor; now my expectation is to stream video in 4K.

Where are we today?

In order to appreciate where we are globally, I stumbled across the Speedtest Global Index, which provides a brilliant overview and data set to aid our understanding of global broadband speeds.

As of March 2021, the average global fibre internet speed is 97.52 Mbps; this has grown by 29% in the last 12 months. This rate of growth needs to be considered in the wider economic context of the last 12 months, factoring in the impact of Covid-19 on global supply chains and the national lockdowns being imposed.

However, when combining data from this site with worldpopulationreview.com, we can see that only 32% of the world’s population live in markets with access to speeds above this average; the remaining 68% do not.

Interestingly, as there has been a greater need to work from home in the last twelve months, 62% of the world’s population live within countries which provide an average level of broadband connectivity (>50 Mbps) capable of enabling this. It should be noted that these values are country/city-state averages; it doesn’t mean that all of the 4.7 billion people living in these geographies can afford this service. If we take a conservative view and assume 60% can achieve these speeds, this equates to 2.8 billion people, or 37% of the global population.

Applying a 50% annualised growth rate to this data set, we can expect 83% of the world’s population (not factoring in forecast population growth) to have access to average broadband speeds of greater than 55Mbps by 2024, with the world average hitting gigabit speeds by 2028.
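The arithmetic behind that projection is simple compounding. A sketch, assuming Nielsen’s 50% annual growth holds from the March 2021 global fibre average:

```python
def nielsen_projection(speed_mbps, years, annual_growth=0.5):
    """Project bandwidth under Nielsen's Law: ~50% growth per year."""
    return speed_mbps * (1 + annual_growth) ** years

# From the 97.5 Mbps global fibre average of March 2021,
# seven years of compounding lands in 2028:
print(round(nielsen_projection(97.5, 7)))  # ~1666 Mbps, i.e. gigabit speeds
```

The average crosses 1 Gbps between years five and six, which is where the ‘gigabit by 2028’ figure comes from.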

This may appear optimistic, as it is based on fibre broadband being rolled out across the globe, which is notoriously expensive to deploy. However, 5G is already being rolled out in 30% of the world’s countries. It promises download speeds of up to 20 Gbps and upload speeds of 10 Gbps, and had demonstrated, as of the end of 2020, download speeds greater than 500Mbps in the UAE.

Also in contention to extend the global coverage of the internet is SpaceX’s Starlink project. This has plans to put circa 30,000 satellites into orbit, has already achieved 10% of this number, and promises download speeds of 300Mbps by the end of 2021. This new entrant will disrupt global broadband markets and, once operational, will likely claim the big investment banks and trading floors as customers.

Why? Simply, because light travels faster through a vacuum than it does gas or liquid.

The estimated round-trip time of a data packet travelling from London to New York via undersea cables is 80 milliseconds; Starlink could move the same data packet in 43 milliseconds. THIS IS A BIG DEAL – not only because every time-sensitive industry on the planet will pay a premium to ensure that when they make a decision it is actioned straight away, but also because London to New York is a relatively short trip in data-packet terms; over longer distances, for example London to Singapore, the benefits are further elevated.
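A back-of-the-envelope check shows why vacuum beats fibre. The figures here are my own illustrative assumptions (roughly 5,600 km great-circle distance and a refractive index of ~1.47 for silica fibre); real cable and satellite paths are longer, which is why the measured 80 ms and 43 ms figures exceed these lower bounds:

```python
C_VACUUM = 299_792            # km/s, speed of light in a vacuum
C_FIBRE = C_VACUUM / 1.47     # ~204,000 km/s in silica fibre (refractive index ~1.47)

def round_trip_ms(path_km, speed_km_s):
    """Round-trip propagation delay in milliseconds (ignores switching and queueing)."""
    return 2 * path_km / speed_km_s * 1000

# ~5,600 km great-circle London to New York (illustrative):
print(round(round_trip_ms(5600, C_FIBRE), 1))   # ~54.9 ms lower bound in fibre
print(round(round_trip_ms(5600, C_VACUUM), 1))  # ~37.4 ms lower bound in a vacuum
```

Even at these idealised lower bounds, propagating through a vacuum shaves roughly a third off the fibre time, before any routing advantage is counted.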

What are the limitations?

Within the known laws of physics the maximum data transfer rate is referred to as the maximum entropy flux. I openly admit that I don’t fully understand the mathematics that underpin this law, but have provided the links for more learned individuals to attempt to decode it. Essentially, what I take away from this is that we are nowhere near these limitations.

However, the real bottleneck that we are likely to encounter in the near term is that of energy consumption and heat. Essentially, as we build bigger and faster networks, they consume more power, though there are existing research programmes looking to address these issues, the use of spin lasers being one example.

What this does demonstrate, through the combination of these existing technologies (fibre, 5G and satellites), is that gigabit internet speeds across the globe are a real possibility before 2030.

This level of connectivity alone will enable new business models and more immersive experiences, both in a collaborative work setting and socially, supported by tools such as haptic interfaces and augmented reality. It is also likely to have a significant impact on economic migration, education, real estate prices… the list goes on.

Kryder’s Law

This is the youngest of the three laws, coined in 2005 by Mark Kryder, the CTO of Seagate, who observed that magnetic disk areal storage density was increasing at a rate exceeding that of Moore’s Law. This led Kryder to predict that by 2020 a 40TB disk drive would cost about $40.

Obviously, this hasn’t been achieved. As of December 2020, Seagate had started to ship 20TB HDDs, with forecasts of a 50TB model to be made available in 2026. The nearest comparative deal for $40 is 5TB of cloud storage per annum.
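To see how far the trend fell off its 2005 pace, here is a sketch of the compounding Kryder observed. The ~0.5 TB 2005 baseline and the 13-month doubling period are my own illustrative assumptions, not figures from the original prediction:

```python
def kryder_projection(base_tb, base_year, year, doubling_months=13):
    """Project drive capacity if areal density kept doubling every ~13 months,
    roughly the pace Kryder observed around 2005 (assumed, illustrative)."""
    doublings = (year - base_year) * 12 / doubling_months
    return base_tb * 2 ** doublings

# From ~0.5 TB in 2005, that pace implies drives of thousands of TB by 2020 -
# orders of magnitude beyond the ~20 TB models actually shipping.
print(round(kryder_projection(0.5, 2005, 2020)))
```

The gap between a projected several-thousand-terabyte drive and the real 20TB one makes clear just how sharply the curve flattened.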

What are the limitations?

Within the laws of physics, the Bekenstein bound is the limit to which information can be contained within a finite region of space which has a finite amount of energy. However, more pressing for HDDs since Kryder’s prediction has been how fast the platters can spin: the faster they spin, the more energy they consume, generating heat which impacts the longevity of the drive.

Since 2005, SSDs have become a more desirable storage medium due to their smaller form factor, greater speed and lower energy consumption, all of which have enabled the rise of mobile computing since the launch of the iPhone in 2007. The main blocker to mainstream SSD adoption over HDDs has been cost; presently they are twice as expensive as HDDs per gigabyte, though prices are trending downwards.

Looking at when Kryder made his prediction, it was arguably towards the end of the HDD technological S curve. From a commercial perspective, consumers have since also had more storage options, with cloud storage being baked into the price of existing software packages (OneDrive, Dropbox, iCloud etc.).

Combine this with greater connectivity and the Pareto principle – 80% of the time you will use 20% of the applications on your device – and it would appear that storage volume is not as important as speed of access, given the fixed limitations of human attention spans. Hence, a hypothesis could be that market demand has been satisfied, resulting in less urgency for R&D to achieve Kryder’s predictions for either HDDs or SSDs.

With one eye on the future, a potential replacement for SSDs is RRAM, which is more energy efficient, faster and has a greater areal storage density – 1TB fitting into the size of a postage stamp. This compounding of technological S curves, I suspect, could resuscitate Kryder’s Law.

I have not been able to validate it, but there are reports that TSMC is to start producing RRAM chips to gauge market reception, so we might not have to wait too long to find out.

So what? Why is this important?

These laws have been the driving force of the digital age, providing a regular cadence of progress which is deflationary. In isolation, each law, from a human perspective, appears to progress in a linear motion, akin to tectonic plates slowly edging across the seabed. The reality is that these laws demonstrate exponential growth – something which, as humans, we struggle to comprehend – resulting in earthquakes and tsunamis of innovation. If you don’t believe me, go and check the market cap of Apple compared to the FTSE 100, or the combined performance of FAANG stocks.
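The tectonic-plates-versus-tsunami gap is easy to make concrete: compare thirty steps of linear growth with thirty doublings.

```python
# Thirty steps of adding 1 versus thirty steps of doubling.
linear = [1 + n for n in range(30)]        # 1, 2, 3, ... 30
exponential = [2 ** n for n in range(30)]  # 1, 2, 4, ... 536,870,912

print(linear[-1])       # 30
print(exponential[-1])  # 536870912
```

After thirty steps the linear series has reached 30; the doubling series has passed half a billion. Our intuition tracks the first and is blindsided by the second.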

Essentially, for everything that is impacted by all three, the underlying cost effectively trends towards $0, given a long enough timeline. Anyone who sold CDs or DVDs in the ’90s can attest to this, as streaming content, be it through Spotify or Netflix, in a higher-quality format and with access to more content than there is time to consume, is now offered for a monthly fee.

The deflationary nature of this progress is something that we need to be mindful of, taking into account that our economic models are configured for inflation and have amassed significant amounts of debt. For more information and thoughts on this, I recommend reading (or listening to) Jeff Booth’s The Price of Tomorrow.

Tracking their continued progress, and understanding any known limitations or technical hurdles, allows entrepreneurs, investors and product managers to predict when certain capabilities will become a reality and, as such, provides a timescale for when markets will be primed for the introduction of new disruptive business models. This ‘creative destruction’ has enabled the likes of the FAANG organisations to become world leaders, and provides the same opportunities for tomorrow’s visionaries.

Final thoughts…

From the perspective of the laws of physics, we are nowhere near the limitations of what is theoretically possible in computing.

However, there are near-term roadblocks for Moore’s Law and Nielsen’s Law, the latter of which I am less concerned about from an engineering and technical perspective. The underlying trend of Moore’s Law is at a transitory state, moving from silicon to compound semiconductors, or will become obsolete for general computing as greater focus is poured into specialised hardware. The reality of the situation is likely to become clearer by 2025.

Although Kryder’s predictions did not come to fruition, akin to Ray Kurzweil’s demonstration of overlapping technologies continuing the same exponential growth curve, the introduction of RRAM is likely to reignite this trend.

Personally, reviewing all three laws, the brightest star presently is Nielsen’s, predicting high speed global internet connectivity midway through the decade. As much as there is a large focus on A.I. and its impact on jobs of the future, I actually think that this is going to be a greater disruptive force over the next 5 – 10 years.

For mature markets, it will usher in the possibility of even more immersive UI and UX, as development teams and product managers squeeze every available MB of bandwidth offered up to them. For online gaming and eSports, this will only further cement their growth, all of which will require a greater number of data centres to support demand.

Though the bigger prize is that the majority of the population – viewed as either consumers, prosumers or workers – are not yet living in geographies with average internet speeds greater than 55Mbps. I have chosen this number as it is the rate at which studying and working from home, or wherever is convenient, is enabled. Even in developed economies where these speeds can ‘theoretically be accessed’, they are not affordable for all. This, however, is going to change.

If you live in an area where you have access to affordable high-speed broadband, congratulations: you are probably enjoying the benefits of streaming videos and information overload, along with the frustrations of misinformation and social media. But you are the minority, sitting on the crest of the tsunami triggered by the aforementioned tectonic plates. The oncoming storm surge and its implications have not yet been fully appreciated, due to the limitation of our event horizon.

So, as much as we think we have witnessed wide-scale digital transformation, the greatest levels of societal, educational and economic transformation – with previously inconceivable markets forged by the entry of new actors, who hold a greater variety of cultural and societal norms and perspectives – are yet to come.

This, for me, is a stark reminder of William Gibson, who said:

“The future is already here – it’s just not evenly distributed.”

– William Gibson
The Economist, December 4, 2003

Low code development
14/01/2021
Low code, Trends #appsheet, #citizendeveloper, #lowcode, #nocode

In the last couple of weeks I have been playing with AppSheet, a ‘low code’ platform. This was driven by a desire to tinker and to determine if the platform could enable a client’s use case. The client is a non-technical business owner who wants to further digitally transform their business but is constrained by limited financial resources and a lack of experience in software development.

Playing around in this space forced me to re-examine the key players – a list of which can be seen here. Due to their growing popularity, these tools are now more than ever in the consciousness of business stakeholders, with some espousing that they are the answer to their IT development woes, envisioning a future where each employee is a ‘citizen developer’ – more on them later.

Given the mixture of potential use cases – large firms struggling to keep up with development demand whilst maintaining existing IT infrastructure, or medium-sized firms and start-ups keen to leverage a competitive advantage from these tools – I thought I would share my findings and thoughts on the subject.

What’s all the fuss about?

These platforms are feature rich, enabling those who have some knowledge of software development to create basic applications quickly.

This is due to preconfigured modules of code which users can either drag and drop using a workflow process map, or manipulate as objects on screen. This approach has its foundations in the Modular Programming (established in the 1960s) and Object-oriented Programming schools of thought. A tangible example of this is the drag-and-drop components and containers that have been present in an array of website builders for some time now. The difference is that these tools are focused on developing business applications (apps).
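To make the idea concrete, here is a toy sketch of that component model: preconfigured, self-contained building blocks that a user composes rather than codes. This is purely illustrative; the class and function names are my own invention and do not correspond to any real platform’s API.

```python
# Toy illustration of the component model behind low-code builders:
# self-contained modules that a user arranges, rather than writes.

class Component:
    """A preconfigured building block with a render step."""
    def __init__(self, label):
        self.label = label

    def render(self):
        return f"[{self.__class__.__name__}: {self.label}]"

# Each preconfigured block is just a specialised component.
class Form(Component): pass
class Table(Component): pass
class Chart(Component): pass

def build_screen(components):
    """'Drag and drop' reduced to its essence: ordering a list of components."""
    return "\n".join(c.render() for c in components)

screen = build_screen([
    Form("New order"),
    Table("Open orders"),
    Chart("Sales by week"),
])
print(screen)
```

The point of the sketch is that the user supplies configuration (labels, ordering), while all the implementation detail lives inside the prebuilt components.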

Depending on the platform, they support mobile, desktop and tablet development. The majority provide on-screen emulators, enabling users to view real-time updates to the front end.

AppSheet can spin up front-end UIs based on data held in Excel or Google Sheets. There is also a very cool feature where you state your user stories and the relationships between tables, and the application then creates a back-end data model and front-end UI. Though I haven’t yet played with it, AppSheet also offers the ability to embed basic machine learning (Artificial Intelligence) into the solution, enabling autocomplete in search boxes or populating values into fields.

This can empower employees to take an initial idea to a proof of concept/prototype in a relatively short space of time, which, coupled with technical knowledge, can then be scaled.

Citizen developers

A phrase which appears to be wedded to the low code scene is ‘citizen developer’. For me this instantly qualifies to join the list of phrases applicable for ‘bullshit bingo’*.

Putting my dislike of the phrase to one side, my key objection is that it misleads users as to the capability of these tools. The intended implication is that they are so well configured and intuitive that you do not need computer science knowledge to build an app. This is not strictly the case.

Yes, with limited knowledge you can create a basic mobile app to capture data via a form. Though, in order to develop an app with multiple user profiles, create dashboards, or map relationships in the data model, users will have to invest time in learning the platforms and some computer science basics.

The reality is that you will need to invest in staff training to realise the key functionality of these platforms. So the expectation that the majority of your workforce will be able to make a standalone business app is misleading.

Training and Support

Therefore, key to each platform’s success, outside of its features and UI, is the training and support model for novice developers. To date I have developed basic apps on Mendix and AppSheet.

Mendix has a training academy with dedicated pathways based on your experience level. These are comprehensive and well structured, though they are not quick. I spent over a week learning the application in order to build a beginner app.

Early modules of the training are conducted in a web-based editor, though in order to complete the app you must download a desktop development environment. Here, I found that things get technical quickly. I encountered bugs in the training exercises, and the support forums were of limited help. It was only with some previously acquired technical knowledge and perseverance that I was able to complete the application. There are lots of business users who, at this point, will lose interest.

For AppSheet, which I feel has a more intuitive UI to develop in, the training in comparison is pretty poor. AppSheet, prior to being acquired by Google, provided weekly YouTube videos to answer questions. There are also developers’ forums and a help section.

These are OK, but as the intended user has limited to no IT knowledge, these resources are not user friendly. For example, their YouTube videos generally last well over an hour, making searching for and accessing relevant information awkward. There are other content creators who post more structured training on YouTube, AppSheet Training being the best that I have found to date. The product, however, could be improved by providing a coherent learning pathway for novices.

Just more shadow IT?

There will be IT managers who will view these tools as just another subset of shadow IT: something they are expected to support indirectly while continuing the ongoing game of ‘whack-a-mole’ – replacing, over time, a myriad of apps built on different platforms with overlapping use cases.

For those of that mindset, these tools can be a valuable addition to your development armoury. Remember, it’s all about providing value. Generally, when asking stakeholders what provides the greatest amount of value, they either struggle to communicate it, or provide a barrage of contradictory terms and conflicting requirements.

In this scenario, it’s common for an IT representative to structure an approach to elicit a consistent response. This can easily result in frustrated stakeholders due to a perception of inaction. These tools can help break this deadlock by facilitating the creation of tangible prototypes, which can then be scaled using cloud or on-premises resources. Hence, they have a place in a software development continuum, spanning proofs of concept through to full-blown enterprise applications.

In a workshop setting, combining the relevant stakeholders and technical skills, low code tools will enable the creation of tangible applications, not documentation, in days, if not hours. This can then be deployed to capture feedback, informing future development. This helps IT to break down barriers with business stakeholders, mitigating the risk of them going it alone.

Verdict

For those who don’t have a technical background and feel that their IT department provides minimal value, low code tools definitely reduce the barriers to building apps quickly. However, don’t get seduced by the hype. They are not the ‘secret sauce’ to instantaneously making an organisation Agile. In order to get the best out of them, you will need to either invest in training or hire relevant resources. You will also need to partner with your IT department to ensure that what you build can be supported and scaled once it becomes a hit and outgrows emailed spreadsheets.

For the IT professionals reading this, I would recommend taking a proactive approach when it comes to low code. These tools, if they have not already, will end up permeating your business. So rather than resisting, pick a platform and set up some hackathon workshops. Yes, this will require a concerted and continuing effort in stakeholder management, but the alternative is going back to playing ‘whack-a-mole’.

Medium sized firms and start ups: have a play. There is some great functionality, though utilising it depends on how willing you are to learn the platforms. There are consultancies as well as freelance developers who will be able to leverage these tools on your behalf. Depending on your tool of choice, you may have to shop around to find the right skills at the desired price point.

If you are unsure of what platform is best to meet your use case, or how best to adopt a low code solution, please feel free to contact me for a free consultation. 
 
*yes, I am aware that management and IT have made a considerable contribution to this list already.

The Art of Scoping
26/10/2020
Transformation #Producttrees, #programmemanagment, #projectmanagement, #scoping, #WBS
0

What is scoping?

Scoping encompasses the relevant practices and processes to ensure that the project provides all the deliverables required of it and does not expend any unnecessary effort. In the context of this article, I’ll borrow the definition outlined by the PMBOK Guide, in that the term scope can refer to:

  • Product Scope – features and functions that characterise a product and or a service
  • Project Scope – the work required to deliver a product or service with the specified features and functions. Thus, project scope can encompass product scope or an element of it

For those of you not familiar with the PMBOK, it also states a logical flow of activities that the Project Manager should perform in order to scope a project. These are:

  1. Plan Scope Management
  2. Collect Requirements
  3. Define Scope
  4. Create WBS
  5. Validate Scope
  6. Control Scope

However, in this article I am going to deviate from the PMBOK and jump straight to Collect Requirements.

Why?

Firstly, because part of the ‘art’ of scoping is appreciating that applying the above steps in the real world will require a number of iterations. Secondly, because this article, in part, provides an outline which can be reduced to a set of bullet points that will form the basis of your initial Scope Management plan; essentially, capture this once you’ve read the article and you’ve met the criteria for step one.

The more important point that I am keen to get across is that, based on my experience of scoping, I have yet to encounter a sponsor or group of stakeholders who have clearly defined, even partially, ‘what’ they want to achieve. Without going into the multitude of reasons as to why this doesn’t happen, your focus should be on what is required to progress the project successfully: establishing a shared and agreed understanding of what the project will and will not do, a.k.a. the scope. Hence, the first weapon from your armoury you will need to blow the dust off is your stakeholder management skills, as you’ll need to ensure that you talk to all the relevant parties and capture their interpretation of what the project will do. So, let’s get started.

Collect Requirements

Step one is to request any existing requirements, if already documented. If there is something, even if this ‘document’ also contains the words ‘Benson and Hedges’*, be grateful. It means that some initial thought has gone into ‘what’ it is that they want. Your job now, with the assistance of a Business Analyst if required, is to refine your understanding of these requirements. To aid you in this process, engage the local organisation and/or PMO to determine if there are any standard tools or templates that you can use, or that are required to meet any internal governance criteria. Also, if you do have resources supporting you, agree on how you will document and store the requirements going forward – this forms the low-level basis of your Requirements Management plan. Ideally, requirements should be stored in a Requirements Traceability Matrix, which can be as basic as Post-it notes on a whiteboard, an Excel spreadsheet or a dedicated system.
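A Requirements Traceability Matrix really can start life as something very simple. As a hypothetical sketch (the field names and sample rows below are illustrative only, not drawn from any formal standard):

```python
# A minimal Requirements Traceability Matrix as a list of records.
# Each row links a requirement to its source and current status,
# so nothing gets lost between stakeholder conversations.

requirements = [
    {"id": "REQ-001", "description": "Capture customer orders via a form",
     "source": "Sponsor interview", "priority": "Must", "status": "Approved"},
    {"id": "REQ-002", "description": "Weekly sales dashboard",
     "source": "SME focus group", "priority": "Should", "status": "Draft"},
]

def awaiting_sign_off(rows):
    """List the requirements still needing stakeholder validation."""
    return [r["id"] for r in rows if r["status"] != "Approved"]

print(awaiting_sign_off(requirements))  # → ['REQ-002']
```

Whether it lives in a spreadsheet or a dedicated system, the value is the same: every requirement carries its origin and status, so you always know what still needs validating and with whom.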

You are likely to find that, at this stage, the requirements that you have raise more questions than answers, or that nothing has been documented. Subsequently, the second step you will need to take is to organise a further round of discussions to clarify any questions and elicit requirements from the relevant stakeholders, which will invariably begin to flesh out the next level of detail.

Approaches that you can take to achieve this include dedicated interviews with key stakeholders, or establishing focus groups of Subject Matter Experts (SMEs). Which you opt for depends on your experience as a PM and that of the organisation and customer. Personally, I would recommend interviews with key stakeholders as a first step, as you are likely to need to build trust with them, and a private environment can allow more confidential information to be shared. This can be key to the overall project’s success, as it will likely inform your future decision-making process.

The other benefit of this approach is that some stakeholders are likely to be more extroverted than others when it comes to sharing their opinions – by giving each of them their own space, you ensure that no one voice dominates the discussion. The downside is that it is time consuming, and certain stakeholders are likely to have busy schedules. Hence you will need, at some point, to hold a meeting with multiple SMEs. When doing this, be clear on the purpose of the meeting and have a clear agenda to structure the conversation.

In order to mitigate a single voice dominating the conversation, make a concerted effort to ask the opinions of quieter members whilst in the meeting. You can also use active listening techniques to summarise your understanding of key points and clarify if everyone agrees with your understanding – this will give others the opportunity to voice their perspective.

Solution mode

Regardless of the meetings that you hold, you are likely to encounter stakeholders who have already jumped to ‘solution mode’. This can be characterised as being told ‘what will fix a problem’. This invariably will be based on preconceived ideas and assumptions which may not address all the underlying issues. Hence, a tactic to address this is to use an approach such as the ‘5 whys‘; asking ‘why’ five times in a row in order to find the root cause of said assumption or issue. Another alternative is using an approach such as Design Thinking to foster divergent thinking to build consensus on understanding the problem, before attempting to use convergent thinking to design the right solution.

You may get some ‘push back’ from stakeholders who have already, in ‘their mind’, solved the problem and feel that their solution just needs to be built. Resist the urge to blindly follow this person. As well-intentioned as they may be, from my experience I have not yet met a stakeholder who doubles as a ‘design genius’ and a ‘mind-reading savant’ of the customer; we are all fallible. It is highly likely that this person is what can be described as a dominant, action-orientated personality type**: ultimately they want to see results and are less interested in the process.

As such, they are likely to perceive anyone not agreeing to immediate actions as a blocker. In order to harness their enthusiasm, without them dominating the process, I recommend creating a prototype as it demonstrates tangible progress and allows you and the team to clarify requirements, helping you to validate the scope.

Prototyping

I cannot stress enough the benefits of creating a prototype as quickly as possible. It will help all involved to reach a shared understanding of the scope of the solution, enable the elicitation of requirements and help to identify any underlying assumptions, allowing the team to learn as quickly as possible. The thing to remember with prototyping is that it can be exceptionally low tech – hand-drawn sketches on paper as an initial first step can be all that you need.

For a brilliant example of what Eric Ries refers to as ‘Wizard of Oz testing‘ in his book The Lean Startup, see his account of a team creating a product called Aardvark. In order to create a working prototype they hired humans to replicate back-end functionality, to demo their product and confirm with their customers that what they were creating provided value. This example demonstrates that you don’t have to get drawn into a full-on development cycle to create an ‘all singing and dancing’ prototype. All that is required is a little imagination and creativity.

Depending on the nature of the desired product and the maturity of the organisation creating it, you may have access to more elaborate tools. The point is to capture something as quickly as possible which will elicit a conversation to help you and those involved build consensus as to ‘what’ it is that needs to be built, and ensure that it has value to the intended customer.

WBS/Product Tree

Once you’ve captured a list of requirements via interviews, workshops or potentially building prototypes, you will begin to get a clearer idea as to ‘what’ the scope is, though you may not yet have captured anything which clearly depicts the scope to other stakeholders. It is at this point I recommend creating a graphical representation of it. This is sometimes referred to as the Work Breakdown Structure (WBS). I have also encountered variations of the same concept in Product Management and Agile tools and techniques, with Product Trees being a prime example of the same concept inverted. All you need to get started is a whiteboard or pad of paper, some Post-it notes if desired, and some pens. For more information on how to create a WBS, check out this video:

Regardless of whether you capture a WBS or a Product Tree, the key benefits of depicting the scope graphically are that:

  • You can structure it however you see fit; features of a product, phases of a project etc.
  • It enables decomposition, breaking down deliverables from the upper levels down to each Work Package or User Story, creating a clear list of deliverables which you can validate and prioritise with stakeholders
    • N.B. you don’t need to decompose to a uniform level across the diagram; some elements will go to a greater level of detail than others
  • You can also include Enabler Work Packages or Sub-Components which will be delivered by third parties, identifying dependencies
  • Once you have decomposed the elements, you can further plan, manage and control the project to a greater degree
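As a thought experiment, the decomposition idea above can be sketched as a simple tree, where the leaves are the Work Packages you can plan and track. The deliverable names here are illustrative only:

```python
# A WBS as a nested tree: internal nodes are deliverables,
# leaves are Work Packages. Note the uneven depth - not every
# branch needs to be decomposed to the same level.

wbs = {
    "Customer Portal": {
        "Front end": {"Login screen": {}, "Order form": {}},
        "Back end": {
            "Data model": {},
            "API": {"Auth endpoint": {}, "Orders endpoint": {}},
        },
        "Training": {},  # not yet decomposed - and that's fine for now
    }
}

def work_packages(node):
    """Collect the leaves: the deliverables you can validate and prioritise."""
    leaves = []
    for name, children in node.items():
        if children:
            leaves.extend(work_packages(children))
        else:
            leaves.append(name)
    return leaves

print(work_packages(wbs))
```

The sketch mirrors how the diagram is used in practice: upper levels give stakeholders the big picture, while the leaf list becomes your deliverables to validate, prioritise and track.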

Creating a WBS or Product Tree should be an inclusive task, allowing you to draw on the expert judgement of SMEs who may have created or delivered similar products or projects previously. You can also create a draft WBS to help elicit a conversation with them, and use this as part of a meeting or workshop if required.

You may find that some deliverables cannot be decomposed at a particular point in time. This could be for a multitude of reasons (e.g. they will not be achieved until far into the future, or there is a lack of knowledge at this particular stage). Don’t fret, this is an iterative process. The only other thing to be wary of is attempting to decompose down to an excessive level, which is clearly a waste of your time and effort and of those assisting you in the process. The key question that you need to ask yourself is: can the lowest level of the diagram, for example a Work Package or User Story, be delivered in a project reporting / Sprint cycle? If yes, then you have the right level of detail. If not, then you may need to refine it further.

Validation

The point, up to this stage, has been to engage all the key stakeholders and look to build consensus around what it is that you and the team have been tasked with delivering. This may have taken multiple iterations whilst you refine your understanding, though you should have enough information to capture a scope statement. This will include the following points:

  • Description of the scope
  • List of deliverables
  • Acceptance criteria
  • Exclusions – what is outside of the scope?
  • Justification – background as to why the project is needed
  • Assumptions – document any if known and outline how you will look to test these

You may also find, the first time that you are responsible for scoping a new project, that you have the underlying feeling that you’ve missed something. This unease isn’t a bad thing; ultimately it means that you take pride in your work and want to do it to the best of your ability. However, you can’t let this cripple you into a state of inaction. In order not to fall into the trap of ‘paralysis by analysis’, you need to share this scope statement with the Sponsor and key stakeholders in order for them to validate it, and ultimately to seek their approval.

The nature of the organisation that you are working in will determine if there is any formal documentation and/or procedure you need to follow at this stage. Check what these are with the Head of the PMO. If there isn’t one, ask other Project or Programme Managers what the correct procedure is. In the absence of anything being defined formally, the bare minimum is that you need to play this content back, either via a Steering meeting or by engaging stakeholders on a one-to-one basis, in order to seek confirmation. Ideally this will result in them providing written confirmation via an email, following a meeting where you have presented this content to them.

Scope Control

Congratulations! Upon receiving confirmation, you have officially defined and baselined the project’s scope. I have been in situations where this can be cause for celebration, given that it can take multiple attempts before all key stakeholders are aligned and agree with one another. Though it is not yet time to crack open the bubbly. The project team, if the roles have already been defined and mobilised, will be keen to crack on with delivering as quickly as possible. But as the PM you need a plan for how you are going to control the scope going forward, as Requests for Change (RFCs) are inevitable, and you will need the project team and the Sponsor to understand their roles and responsibilities in supporting you with this.

Change Management plans are outside the scope of this article, though I would stress that you need to tailor your approach depending on the perceived likelihood of encountering RFCs. I have worked on telecommunications rollout programmes where a dedicated resource manages this process alongside project managers, as well as software development projects where the process had not been formally documented. In the latter case, due to the need to increase funding on the project to cover resource costs to deliver additional features, the RFC was treated formally and was a regular item in steering meetings until resolved. In short, have a plan, even if it is a set of bullet points in an annex of a steering meeting presentation.

Summary:

So there you have it! I hope you appreciate that scoping is not as rigid as the PMBOK structure suggests, and that with a little time and experience you’ll come to see that scoping is an art rather than an exact science. It relies heavily on stakeholder management and communication skills, alongside tools such as building lo-fi prototypes and capturing a WBS or Product Tree. The key is to build consensus across stakeholder groups, using graphical representations and formal documentation, prior to having what is captured validated and approved.

*Please note that the author of this article does not in any way support smoking, or any particular brand of cigarettes for that matter

**For more information search DISC personality types

  • Ries, E. (2011) The Lean Startup. St Ives: Portfolio Penguin, pp. 103–106
10/06/2020
Transformation Product Management, Program Management, Programme Management, Project Management, Stakeholder Management, Steering meeting
0

For some, the mere mention of the words ‘steer co’ or ‘steering meeting’ can cause them to start sweating profusely, knowing that the project sponsor, along with an assortment of senior stakeholders, is going to scrutinise all the team’s hard work to date, as well as pull your plans to pieces!

Fear not, from my experience this is far from the case. These meetings are as important for you as a project or programme manager as they are for the attendees to ensure that the initiative is a success. In order to help you through what can be a stressful experience, especially for first time PMs, I have created the following guide.

Meeting Agenda & Content

The first question that you need to ask yourself is, ‘What is the objective/purpose of the meeting?‘. To answer this you need to start with the end in mind. What do you want to achieve by having this conversation? It might be an endorsement of a decision, ratification of an existing plan based on an update, or a request to support the resolution of a key issue/blocker. As part of structuring the agenda and doing the necessary preparation work, you need to be able to concisely articulate what you and the team responsible for delivering the Project or Programme need to get out of the meeting. Once you have determined this, you can document an agenda and create the supporting artifacts to enable the desired conversation. Ironically, this is the first step in positioning or ‘steering’ the meeting before it occurs, to ensure an actionable outcome.

The next consideration is how long the meeting is. This will depend on the complexity of the required discussion and the availability of key stakeholders. If stakeholders can only make a 30-minute slot and you have created enough content to cover 45 minutes, without any time for questions and discussion, then you’re going to need to streamline and/or summarise the key content. A rough rule of thumb, depending on your meeting objective, is to structure the content around 5 slides:

  1. Restate the problem statement and or purpose of the meeting – allow your stakeholders to refocus their attention. Most will have come from a previous meeting and / or may not have read the agenda and meeting invite, so help them to focus on why you have requested their time
  2. Context – replay a high-level historical overview and any lessons learnt captured to date. This should provide the necessary background to the discussion and might include any previous actions taken by the Sponsor and the meeting attendees
  3. Analysis – summarise the analysis completed to date, outline how data has been captured as well as the positives and negatives of each option reviewed
  4. Recommendation – outline the recommended option, the implications for the Project (cost, time and scope) as well as the necessary support from the Sponsor and key stakeholders
  5. Call To Action – this is where you close the meeting out and summarise what the next steps are

The following supporting content should also be shared, either via a link or as an appendix, and updated for every meeting, though it may not necessarily feature in the main discussion:

  • Status update – this should cover progress made to date, what the planned next steps are, overall RAG status of the project and if any support is required and from whom
  • Financial health – the minimum requirement would be a slide or dashboard showing planned spend versus actual spend to date and a summary explanation outlining what was being done if it is not within an agreed tolerance level
  • RAID – a summary of the top Risks, the categorisation of each and action taken to date; Issues, action taken to date and any assistance needed; and any changes to the Assumption and Dependency logs
  • Change log – this should show a list of Change Requests, the source of each request, its status, business value and Impact Assessment. Please note, if a request is within the Project Manager’s agreed tolerance level to approve, then this content may not be necessary. If you are going to include a Change log, make sure it contains the key items that require the Sponsor’s approval.

A steering meeting should never be just a status update. Yes, a status update needs to be included, but if you do not want an engaged sponsor and supporting stakeholders, holding a meeting just to read through a slide is the quickest way to ensure that they don’t turn up to future meetings when you do need them. If you don’t have anything significant to discuss, respect their time and share a status update via email. In the long run they’ll thank you for it.

Stakeholder Management

These ‘soft skills’, along with the hard skills showcased in the supporting content, are what determine the success of your meeting. They take a considerable period of practice, and both good and bad experiences, to finesse. So don’t beat yourself up if things don’t go to plan. Below is a set of high-level approaches and strategies to get you off on the right foot and make your meeting a valuable experience for all.

Firstly, you need to establish who is attending the meeting. For a successful meeting you will need a:

  • Sponsor – accountable for the overall success of the initiative and is able to champion the project’s cause with senior stakeholders if required
  • Financial representative – a stakeholder who has the authority to talk to financial matters and can address/provide guidance on any issues around budgeting
  • Customer representative – if discussing an intra-organisation product or project, this will be someone from Operations, if an external Product this could be the Product Owners or Manager
  • Support functions – HR, IT, Marketing, Design, Compliance and legal authorities

As you have probably guessed, the first three roles are key for every meeting. Depending on the nature of the discussion, the representatives under the banner of support functions will need to be invited on a case-by-case basis, but should be included in any status updates and/or meeting minutes if they are not able to attend. It will be down to you as the Project Manager to determine if a lack of representation at a meeting is a blocker to progress. If so, work with the remaining stakeholders to determine what can be done to resolve or mitigate the issue.

Warming up

Prior to any key meeting, engage key stakeholders and share a sneak preview of the discussion. Use this firstly as a litmus test to capture any feedback and/or concerns, and to determine if the message that you are trying to convey is understood. Remember, ‘the art of communication is the response that you receive’ (for more on this, see Active listening). I am yet to meet a Sponsor who enjoys a surprise in the middle of a meeting, and they will be even less forgiving if they happen to be the last to know a key piece of information. Even if you are only able to grab five minutes with them prior to the meeting, do so. If you are not able to do this with everyone attending, see if you can split the responsibility with the colleagues you have shared this information with. This will give you an idea of what ‘push back’ you are likely to encounter, and you will be able to prepare responses in advance.

Personality types

Depending on who is involved with the meeting you will have to learn to get a feel for key stakeholders’ personalities and adjust your style and content accordingly. Things to consider are:

  • How do they like information presented to them? Do they like diagrams, or concise bullet points? A key way to get a feel for what works is to see if they have created a presentation previously, or if there are examples that they particularly like. You can then take the core approaches of these and use them to inform your own.
  • Key phrases or terminology that trigger a positive or negative response – I have had stakeholders where I knew the use of certain phrases would cause their hackles to rise. If that is the case, remove them from your presentation and make a concerted effort to remove them from your vocabulary, at least in their presence.
  • Attention to detail – some stakeholders will want to go through every calculation and value, as well as scrutinise every typo and use of bad grammar. Others will not care. Of these two, I guarantee that you will encounter someone who fits the first description at some point. My advice is to get someone to proofread your content as a quality check, and as part of the ‘warming up’ exercise go through any key bits of detail with this stakeholder in advance. Based on my experience, these people are generally trying to drive high standards, even if they demonstrate it in a way that can seem anally retentive. Hence, embrace the challenge and see it as an opportunity to silence them. The best way to do this is to demonstrate that you can answer their questions and provide them with the necessary detail. Note that in addressing these detailed questions, you should do so with content that you can store in the appendices and reference in the core of your presentation. For more on this topic, see ‘Handling awkward questions’ below.

Environmental factors

The other thing to be mindful of when preparing content is to determine the norms and practices of the client/sponsor’s organisation. Things to consider include whether content needs to follow any specific branding guidelines and/or formatting standards. Also, depending on the customer, they may be more formal and like slides issued in advance. If you are not sure of the cultural norms and etiquette of the client organisation, ask someone who has worked with them previously; if there isn’t anyone, contact the customer directly and outline your intended approach.

Meeting invites

Every meeting invite should state the following three things, which can be remembered by the acronym PEP:

  • Purpose: state the purpose of the meeting
  • Expectations: state what to expect (e.g. this is a presentation to facilitate a discussion on issue x, or this is a workshop to identify how we can address issue y etc.)
  • Prerequisites: outline for those attending if they need to do anything to prepare for the meeting. This is a key piece of information if you are planning to hold a workshop.

Handling awkward questions

Trust as a professional is one of the key assets you need to develop with all your stakeholders, especially those on your steering group. The best way to build it is through transparency and a regular cadence of delivered value. Sooner or later you will encounter a question that you haven’t prepared for. No one likes being unable to answer a question; however, people like it even less when you provide an answer that turns out to be incorrect. That erodes trust, which is the key currency that will enable you to deliver the project successfully. Saying ‘I don’t know’ is not a cardinal sin, but remember it is not what you say but how you say it. If you encounter a question you can’t answer, first use active listening techniques to play back the question you have heard; this gives you time to collect your thoughts, and allows the stakeholder who asked the question to confirm your understanding or rephrase the original question in a different way. If you are still unable to answer, acknowledge that you can’t and politely state that you will take an action to find the answer and get back to them.

Meetings deviating from the agenda

There is a possibility that the meeting deviates from the agenda. If you are using the five-slide structure I outlined above, this is most likely to occur from the Analysis slide onward. If you have followed my suggestions around warming up key stakeholders, this should help mitigate the risk, as you will already have captured their concerns and feedback and be ready to address them. Sometimes, though, unforeseen events arise at short notice that cause stakeholders to deviate from the discussion or go off on tangents. Your job in this situation is to deploy active listening techniques and determine whether the discussion being held, though unforeseen, can benefit the issues you are trying to get addressed. If you are not sure that this is the case and the conversation has gone on for more than five minutes, politely interject and play back your understanding to those having the discussion. Once they have confirmed that your understanding is correct, restate the purpose of the meeting and clarify how the discussion relates to it. If they acknowledge that it doesn’t, suggest that you set up a separate discussion to address that point at a later time, which should enable you to get back on track.

Post meeting

Ideally within 24 hours, capture a summary of the meeting whilst it is fresh in your memory; this only needs to be a couple of sentences long. Also, capture the actions from the meeting and issue this along with links to the content that was presented to all on the steering group. You can also take this opportunity to outline when the next scheduled meeting is likely to be.

Nerves

A friend of mine, an ex-bouncer, recently regaled me with a story about a night when the club he was working at descended into a mass brawl. Bottles and chairs were being thrown and people were physically scrapping left, right, and centre; obviously not a normal night, and one which for most of us would be extremely stressful. He and his colleagues had to step in and resolve it. Before doing so, though, he scanned the room. This allowed him to assess what was happening, but more importantly to identify the biggest risk to him, his colleagues, and anyone else who happened to be in the wrong place at the wrong time. For him, this was a person sat at the bar laughing and smiling. For that person, the evening’s events and the ensuing chaos weren’t in the least bit stressful, but actually pleasurable. That, for my friend, was the biggest threat in the room: he knew from experience that being comfortable in a setting means you have experienced it multiple times and are ready to react. The key takeaway is not for you to go and join a fight club; I’ve endured enough injuries on a rugby pitch at the weekend to know that a black eye on a Monday morning is never a good look. What it does tell you is that experience of a situation translates into familiarity, and familiarity enables you to cope.

I appreciate that if you are someone who is currently petrified of presenting, the idea of putting yourself under this level of stress so that at some point in the future you’ll be able to cope may not be that comforting. However, the sooner you engage with it, the sooner you will be able to address your nerves. I strongly recommend developing strategies to help you cope. For those interested in learning how to channel nervous energy positively, Amy Cuddy’s TED talk discusses how you can tap into your body language. Other approaches include using prompt cards as an aide-mémoire and performing your presentation for a peer or trusted colleague who can provide feedback. Another option is to use your smartphone to record yourself presenting, which you can then review.

Summary

To ensure that you ‘steer’ your steering meeting, the key takeaways from this article are: define the objective of the meeting; structure your agenda and presentation to elicit approval for a recommendation or decision; identify and warm up key stakeholders; get a colleague or friend to ‘sense check’ your presentation for grammar and typos; and practise your active listening skills to help clarify any awkward questions and give you time to think. These skills will help you navigate your first steering meeting. There are other soft skills you can develop that I haven’t outlined above; I would strongly recommend observing others’ presentations to help you find your own style. Do remember that getting comfortable in front of an audience is a skill just like any other; it simply takes time and practice.

02/06/2020
Transformation Products, Programmes, Projects
0

What’s the difference between Products, Portfolios, Programmes and Projects?

Recently, whilst overseeing a Portfolio of Projects for a client, the IT Director requested that I come and provide assistance to help clarify the scope and define an appropriate governance structure for a new piece of work.

The issue they had encountered, as a partial Sponsor, was that though the initiative had been approved for funding, no one directly involved in it could clearly articulate the scope, in essence ‘what’ it was they were trying to achieve; approval had been given on the basis that it had a vocal senior Sponsor. The IT organisation had given its approval because it did not want to be perceived as a blocker, and felt that by assigning specialist resources to review the request and determine the scope of the initiative, it could then advise an appropriate course of action.

I’ve witnessed similar situations in the past, and have heard of similar scenarios from colleagues in different organisations and sectors. Hence, I’m reasonably confident that this isn’t an anomaly, and I thought it useful to clarify the different types of ‘P’s that professionals are likely to encounter and highlight the high-level nuances of each.

Portfolio Management

A portfolio can refer to a set of projects, products, programmes, and sub-portfolios, or a mix of all four. The key thing to remember is that the constituent projects, programmes, etc. within a portfolio may not be inter-dependent or directly related.

Ultimately, portfolios exist to align execution with business strategy: to achieve the business’s strategic objectives as cost-effectively and as quickly as possible. Management of portfolios and the associated governance structures reflects this, with a primary focus on assessing each piece of work’s business case and its key criteria (scope, risk profile, business/customer value, and the associated cost and time estimates), whilst periodically reviewing approved work to ensure it is on track to achieve its stated objectives.

Depending on the organisation’s maturity, one way of assessing which requests receive the required resources is to apply a score/rating to each criterion and then use a weighted-average calculation to produce an overall score. The requests with the highest scores receive the relevant funding; those that don’t are either mothballed until funds become available or cancelled.
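As an illustration, the weighted scoring approach described above can be sketched as follows. Note that the criteria names, weights, and ratings here are invented for the example and are not taken from any specific framework:

```python
# Hypothetical illustration of weighted-average portfolio scoring.
# Criteria, weights, and ratings are invented example values.

CRITERIA_WEIGHTS = {
    "scope_clarity": 0.15,
    "risk_profile": 0.20,
    "business_value": 0.40,
    "cost_time_estimate": 0.25,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (e.g. 1-5) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[criterion] * rating
               for criterion, rating in ratings.items())

def prioritise(requests: dict) -> list:
    """Rank funding requests from highest to lowest overall score."""
    return sorted(requests,
                  key=lambda name: weighted_score(requests[name]),
                  reverse=True)

# Two hypothetical funding requests scored against the criteria.
requests = {
    "CRM upgrade":   {"scope_clarity": 4, "risk_profile": 3,
                      "business_value": 5, "cost_time_estimate": 2},
    "Data platform": {"scope_clarity": 2, "risk_profile": 2,
                      "business_value": 4, "cost_time_estimate": 3},
}

ranking = prioritise(requests)  # highest-scoring request funded first
```

In practice, the weights themselves are the contentious part: they encode the organisation’s strategic priorities, so they should be agreed by the portfolio board rather than set by whoever builds the spreadsheet.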

Product Management

Product Management is focused on

‘the building of desirable, feasible, viable, and sustainable products that meet customer needs’

© Scaled Agile, Inc.

As such, in order to design, develop, test, and deploy a new product into the market, products can employ either basic or complex structures consisting of portfolios and/or standalone projects, depending on their complexity and the organisation’s market offering. For example, an organisation may have a portfolio of products at differing levels of maturity within the product life-cycle.

If it is an established product, it should also have a supporting organisation dedicated to servicing existing customers. If it is a new product, one of the things you will need to clarify is who will support the product once it is launched, and what the handover process to them is. For a brand new product, you may need to define and recruit a support organisation to provide this function once the product is launched. Depending on the size and scale of the product, establishing this organisation could become a distinct project in its own right.

Programme Management

A programme is defined in the PMBOK as

‘a group of related projects, subprograms and program activities managed in a coordinated way to obtain benefits not available from managing them individually.‘

PMBOK 5th edition

Programmes, thus, are made up of projects that have dependencies, and may also include related work outside the scope of one or more of the constituent projects. A project can also exist outside of a programme.

Therefore, Programme Management is focused on ensuring that the activities and projects within its scope achieve the overall programme benefits statement. To achieve this, the Programme Manager is likely to devise an additional tier of governance and a set of activities to align and steer the associated projects and their dependencies.

Project Management

At the lowest level of the collection of ‘P’s are Projects. These are temporary organisations assembled to provide a fixed result in the form of deliverables, which define the project’s scope.

By their very nature, projects have fixed beginning and end dates at which point the project is disbanded and the deliverables are handed over to a customer. Examples of a deliverable would be working software or the delivery of a physical document/certificate or piece of hardware.

Project Management, at its very essence, is about clarifying the associated deliverable(s) with stakeholders and overseeing the management of resources to produce these in relation to cost and time constraints. These three elements – Scope, Cost and Time – are sometimes referred to as the ‘Iron Triangle’.

Hierarchical structure

Example hierarchical structure.

The above hierarchy is intentionally complex in order to highlight how Products, Portfolios, Programmes, and Projects can be structured. It should also be noted that Projects can exist in complete isolation, which partly explains the confusion these terms sometimes cause. The key takeaway is, when scoping a new initiative, to focus on listing deliverables, be these Features and User Stories if developing a software solution, or hardware and supporting documentation if delivering a physical good.

Summary:

All these approaches exist to deliver value to the customer as efficiently as possible whilst managing finite resources. Regardless of an initiative’s complexity, the key is to drive clarity around a list of deliverables, as this will be the baseline from which you can determine what value/benefits the project or projects provide, and from this whether additional levels of governance are required. This follows the primary activity of scoping a project, which is an iterative process. A first pass of playing back to stakeholders the deliverables they require will allow you to determine the desired end state, an appropriate delivery structure, and a supporting governance model with which to steer the initiative.
