Looking to the future is always a risky business, but part of my role here at Vodafone Business is to look ahead and set us up to succeed in 2-5 years’ time.
To do this, we need to take some punts on where we think the technology and market trends will take us.
That insight is then used to shape our strategic roadmap to ensure we’ve got the products our customers need, when they need them. Let’s start with artificial intelligence (AI) and machine learning (ML).
AI and machine learning are running in the network today.
We’ve seen some vendors fix up to 90% of network faults without any human intervention1. This allows engineering teams to move higher up the value chain, delivering a better return on their skillset.
It also means customers can really put their trust in the availability of the network more than ever before.
Our Intent Based Infrastructure is an example of a self-healing network based on machine learning. It constantly analyses the events generated across the network, looking for recurring patterns or anything out of the ordinary.
If it detects something that poses a risk to part of the network, it starts mitigating the impact before any issues arise. This could be anything from diverting traffic around a faulty connection while it’s fixed to increasing capacity on a link that’s experiencing abnormally heavy usage.
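To make that pattern-spotting idea concrete, here is a minimal sketch of one common approach: flagging a link whose latest utilisation sample deviates sharply from its own recent history. The link names, sample data and z-score threshold are all illustrative assumptions, not a description of how Intent Based Infrastructure actually works internally.

```python
from statistics import mean, stdev

def detect_anomalies(link_utilisation, window=12, threshold=3.0):
    """Flag links whose latest utilisation sample deviates sharply
    from their recent history (a simple z-score check).

    link_utilisation: dict mapping link name -> list of recent
    utilisation percentages, oldest first.
    """
    at_risk = []
    for link, samples in link_utilisation.items():
        history, latest = samples[:-1][-window:], samples[-1]
        if len(history) < 2:
            continue  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # perfectly flat history; nothing to compare against
        if (latest - mu) / sigma > threshold:
            at_risk.append(link)  # candidate for re-routing or extra capacity
    return at_risk

# Hypothetical links: one suddenly spikes well above its normal range.
links = {
    "lon-par": [40, 42, 41, 39, 40, 43, 41, 40, 42, 41, 40, 42, 95],
    "lon-ams": [55, 54, 56, 55, 53, 55, 54, 56, 55, 54, 55, 56, 55],
}
print(detect_anomalies(links))  # -> ['lon-par']
```

A real system would feed in far richer event streams than utilisation percentages, but the principle is the same: learn what normal looks like, then act on the outliers.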
AI and ML can’t succeed in isolation though. They’re completely dependent on other functions like multi-domain orchestration and API gateways.
These tools are the foundation of a successful network evolution and their importance can’t be overstated. This is where a lot of the investments are now being made and their benefits will be there for all to see in our intelligent networks of the future.
Enterprise networks have had a similar set of service measures for decades.
Customers and providers alike have focused on how long a circuit was up in a given measurement period, or how long it took to fix when something broke. This was easy for everyone to understand, but it’s a very binary approach to measuring service.
That’s where the human experience comes into play.
Network providers will layer software into their services that uses a mix of synthetic transactions and actual customer traffic to generate a satisfaction score for the network. This provides a proxy that allows service providers and customers to see how any application is performing across the network.
It’s taking the Mean Opinion Score we’ve been using for years and scaling it across the whole application estate.
We want to measure the quality of the user’s experience from the start to the end of the journey.
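As a rough illustration of what scaling a Mean Opinion Score across applications might look like, here is a sketch that maps raw path measurements to a 1-5 MOS-style score. The penalty weights and thresholds are invented for the example; real products calibrate them per application.

```python
def experience_score(latency_ms, jitter_ms, loss_pct):
    """Map raw path measurements to a 1-5 MOS-style score.

    The thresholds and weights here are illustrative only --
    a real scoring engine calibrates them per application.
    """
    score = 5.0
    score -= min(latency_ms / 100, 2.0)   # penalise round-trip delay
    score -= min(jitter_ms / 20, 1.0)     # penalise variation in delay
    score -= min(loss_pct * 0.5, 2.0)     # penalise packet loss
    return round(max(score, 1.0), 2)

# Hypothetical synthetic probes for two applications on the same path.
apps = {
    "video-conferencing": experience_score(latency_ms=45, jitter_ms=8, loss_pct=0.2),
    "file-backup":        experience_score(latency_ms=180, jitter_ms=25, loss_pct=1.0),
}
print(apps)  # video-conferencing scores well; file-backup suffers
```

The point of the proxy is exactly this: one comparable number per application, end to end, rather than a binary up/down per circuit.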
The Transmission Control Protocol/Internet Protocol (TCP/IP) is one of those inventions that never gets the credit it deserves.
It was first put into practice in 1974 and, to cut a long story short, allows networked devices to talk to each other. Without it, you wouldn’t be able to send emails, stream Netflix or buy that smart speaker you’ve had your eye on for a while.
Over four decades, networks have improved by orders of magnitude, both in terms of reliability and capacity. The reliability is a key change and underlines one of the main reasons we need to evolve the infrastructure.
Consider a typical network circuit in 1974. It had as much chance of losing some packets between A and B as it did of sending them all successfully.
This meant that the protocol we used to govern the network had to take this into account and perform a lot of error checking. We also had to build in checks and balances to prevent circuits and routers becoming overloaded. This is where we get the multitude of handshakes, checksums and window sizes that determine how traffic crosses the network to this day.
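One of those mechanisms, the window size, shows why this legacy matters for speed: a single TCP connection can send at most one window of data per round trip, so the window caps throughput regardless of how fast the underlying circuit is. A quick back-of-the-envelope sketch (the figures are illustrative):

```python
def max_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound for a single TCP connection: at most one
    window of data can be in flight per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# Classic 64 KB window vs a scaled 4 MB window on a 20 ms path.
print(max_throughput_mbps(64 * 1024, 20))        # ~26 Mbps
print(max_throughput_mbps(4 * 1024 * 1024, 20))  # ~1678 Mbps
```

Without window scaling, that 20 ms path tops out around 26 Mbps however much capacity the fibre underneath can carry, which is exactly the kind of 1970s-era safeguard a modern protocol can rethink.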
The blunt truth is that for most locations, we don’t need some of these functions now. We can trust that a packet sent from one router to another will arrive in the right format, and in the right order. This trust allows us to streamline the amount of information we’re sending which means we can deliver faster networks at the same cost as the platforms in use today.
This is a long-term aspiration though – it will take decades to replace TCP/IP. Until then, there is a lot of work to be done to design the next generation of network protocol. One worth watching is Named Data Networking.
One of the key governing principles of the Internet is that traffic from any user or organisation has the same level of importance.
The ‘net neutrality’ principle has allowed the Internet to grow at incredible speed to become the platform that binds the world together. But… (there’s always a but) some applications are fundamentally more dependent on reliable performance than others. Shouldn’t they be given priority over other traffic?
The pandemic has put a laser focus on video conferencing applications in particular, and how they perform over the Internet.
As service providers, we’re already allowed to apply some prioritisation of an application across the network if we can prove that it’s necessary.
At the moment, that doesn’t go far enough to support the hybrid approach to work that we’re heading towards as a population.
The largest tech providers are likely to have something to say on this soon, and the reality is that we’re in a much better position to apply some prioritisation now.
Customers with fibre broadband services regularly get speeds over 50Mbps, so they have the available bandwidth to accommodate some traffic profiling. Extrapolate this capacity across the network as a whole and the Internet is tending more towards a business network than ever before.
Boundaries are still important though - what we won’t do is prioritise one organisation’s traffic over another’s. We should allow each user or organisation to set their own relative priorities for their applications. As long as they’re not impacting anyone else, the choice should be up to them.
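To illustrate what a per-organisation policy could look like, here is a sketch of weighted fair sharing within one customer’s own link. The application names, weights and link speed are all hypothetical; the key property is that the weights only divide that customer’s capacity and never touch anyone else’s traffic.

```python
# Hypothetical policy: one organisation ranks its own applications.
policy = {
    "video-conferencing": 4,
    "voice": 4,
    "erp": 2,
    "backup": 1,
}

def share_of_bandwidth(policy, app, link_mbps):
    """Weighted fair share for one application when every
    application in this customer's policy is active."""
    total = sum(policy.values())
    return link_mbps * policy[app] / total

# On a 100 Mbps link, video conferencing gets 4/11 of the capacity.
print(share_of_bandwidth(policy, "video-conferencing", 100))  # ~36.4 Mbps
print(share_of_bandwidth(policy, "backup", 100))              # ~9.1 Mbps
```

Idle applications would release their share in practice; the weights set relative priority, not hard reservations.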
Network Function Virtualisation (NFV) was one of the key benefits that stimulated a lot of interest in Software Defined Networking (SDN) when it first launched.
There was genuine excitement in the industry in taking the virtualisation principles pioneered in the data centres and applying them to the equipment we delivered on a customer site.
However, there were some big challenges associated with NFV, mostly around the universal Customer Premises Equipment (uCPE). The horsepower needed to support the diverse workloads increased, as did the equipment costs. The amount of power needed also had a negative impact on the environment.
Essentially, NFV running on the customer premises turned out not to be as scalable as everyone hoped. But edge is the answer.
In this context, edge means the edge of the network. We can deliver services in our global Points of Presence that have the elastic scalability we could never get on the customer premises. This removes the key blocker to delivering NFV and makes it a reality.
The biggest use case for NFV at the edge is likely to be the Secure Access Service Edge, or SASE.
This delivers a series of security functions working in concert with the network. Together, these mean customers can remove large-scale investments from their own data centres and offices and shift them to OpEx pricing.
It also optimises the traffic flow and imposes a common security framework across all locations, improving the experience for the users.
Around the globe, our network reaches 184 countries.
We provide the underlying transport network, the virtual overlay, and the platform to prioritise everything.