Reflections on Web Summit 2023

We catch up with Andy Hall, our Chief Product Officer, about his trip to Web Summit 2023, all things AI, and what this means for the future.

By Andrew Hall

Last week I attended Web Summit 2023 in Lisbon, where we enjoyed one of the warmest Novembers on record. But it wasn't just the temperature that stood out: attendance reached a remarkable 70,000 as technologists, business leaders, investors, politicians and the media gathered to talk, listen and present on tech.

So unsurprisingly, AI (or artificial intelligence) was all-pervasive, from the keynote talks to the chilli stand outside.

- It was delicious, in case you were wondering.

Over the past 10 days I’ve taken some time to reflect on not only what I saw and heard at the conference, but also the years since voice assistants landed in our homes, semi-autonomous cars appeared on our roads and ChatGPT started grabbing headlines.

Having worked as a technologist in a number of regulated industries, it’s been fascinating to observe how other sectors have begun to be disrupted by these innovations whilst technology companies continue to promise (and only occasionally deliver!) transformative AI solutions to the life sciences and healthcare spaces.

Is it all just hype that will fizzle out?

There’s no doubt that there is a lot of hype around AI, but that doesn’t mean the impact won’t be significant once it stops being the centre of attention.

(Healthy) sceptics are pointing out the limitations of these tools. For example, large language model chatbots like ChatGPT have been known to hallucinate (generate false information), can't cite their sources, and can give answers that are months or years out of date, depending on when information was loaded into them.

- Douwe Kiela, CEO of Contextual AI, presenting on the limitations of enterprise AI adoption, and what the future holds for the next generation of technologies.

In fields such as scientific research and medical writing, even the smallest inaccuracy or ambiguity creeping into a piece of communication must be avoided at all costs. Institutions and organizations are rightly cautious, given that the consequences of miscommunication in these fields can range from embarrassing, to costly, to potentially dangerous for people's health and wellbeing.

At Web Summit I got a first-hand perspective on the current pace and trajectory of the development of enterprise AI solutions. I left the conference feeling highly confident that the industry will overcome these limitations in the next couple of years.

We can expect to see reliable and capable AI research assistants both in and out of the workplace, countering misinformation and accelerating the entire knowledge sector.

What does AI mean for our future?

Based on the past few decades of science fiction films, TV and books, many are understandably anxious. Will there be mass unemployment? Are our civil liberties at risk? Will our machines develop sentience and ultimately follow their own agenda?

My own view is simply that these new tools will just continue getting smarter. Ironically, this is nothing new. From clay tablets to pen and paper, from the slide rule to the calculator, from the telephone to the smartphone, we have always been a "toolmaking species".

Dr Adam Poulston, a colleague of mine who is an expert in this field, recently observed that many of the techniques and algorithms underpinning the latest wave of generative AI and large language models have in fact been around for decades. It’s only recently that we’ve developed enough computational resources, and gathered enough data in one place (the cloud), to be able to put these to work in a truly useful way.

To me, this is one of several parallels between this revolution and the industrial revolution that swept over the world over 200 years ago. The first steam engine was invented in the early 1700s, but it wasn’t until developments in metallurgy, explosives and other complementary fields that all of these factors could, together, revolutionise transport and manufacturing.

The industrial revolution had profound implications, both good and bad. For some, living standards dramatically improved as the cost of goods and services dropped. Many jobs were created, but a large proportion of these involved dangerous working conditions. Poverty and hardship were rife. This revolution ultimately culminated in the industrialisation of the military and was followed by two world wars that cost millions of lives.

This mixed vision of the future is, I think, more realistic than that depicted in science fiction. AI will simply give us newer and more potent tools to continue doing the things that we humans have always done, both good and bad.

What does AI mean for me today?

With each month of breakneck progress, and the levels of investment and attention that the sector is receiving globally, it’s impossible to deny that AI will be transformative.

The question, whether you're a business leader, a consumer or even a developer of these next-generation tools, is – what should you do about it today?

In reality, it’s quite simple. As with any business decision, we need to be prudent and responsible.

We should always be asking:

  1. What is the expected benefit of the solution?
  2. How can we measure, empirically, whether we are realising this benefit?
  3. How do we mitigate against errors and failures?

This was perfectly illustrated by one of the talks at Web Summit's "Fourth Estate" stage, which focused on the media.

According to Ed Fraser, Managing Editor of Channel 4 News, it is clear that news teams are increasingly overwhelmed by the sheer number of potential stories that could be investigated and covered. Furthermore, it's no secret that news organizations across the world are seeing increasing pressure on their revenues, as well as competition from social media and new entrants to the industry.

For Fraser, Channel 4 have already seen benefits, both in finding new stories: "We've already combined multiple datasets to uncover stories we might otherwise have missed"; and in saving journalists time: "We use AI for transcription of interviews, we also have a tool to help us with spelling mistakes, and we're leveraging AI to help journalists with other mundane tasks."

This, however, is counterbalanced by the need for verification: "You have to be super open but also super cynical – AI needs massive supervision and content origin should always be traceable."

Athan Stephanopoulos, Chief Digital Officer at CNN, summed up both the opportunities and challenges very succinctly: "90% of the content of the internet within 5 years will have some element of AI in them. Media companies need to set the standards more than ever before and AI raises the bar. But there are also opportunities and trusted news sources should rise to the top. […] Will AI replace journalists? No, but journalists who use AI will replace journalists who don't."

How do we do it at VISFO?

Since VISFO was founded, we’ve been combining a diverse mix of skills to deliver solutions and products that answer our customers’ strategic questions.

As a team of scientists, engineers and designers, the cycle of hypothesising, testing, validating and improving is second nature to us, whether this is during primary or secondary research, the development of new methods or the creation of new tools.

Likewise, verification and fact-checking are built into our research processes, and we've been evaluating and testing various in-house and third-party digital tools to deliver more comprehensive answers, in shorter timeframes, that remain robust.

We have carefully avoided jumping on the AI bandwagon until we could confidently, and responsibly, forecast how the next generation of digital tools may become part of our commercial offering.

We are and will remain committed to our values of scientific and commercial integrity, and I am confident that our rigorous development processes will keep us true as we build our own set of generative AI tools for biomedical and healthcare research, medical affairs and strategic intelligence.

Blog author(s)

Andrew Hall

Chief Product Officer