Sometimes, in our ever-busy world, we forget the humans behind the tech. DSF is fortunate to know some incredible tech leaders across the world and has the privilege of hearing them present at our events. With that in mind, our Speaker Spotlight sets the stage to get to know our speakers on a more personal level and connect them with our growing community. Read the mini interview below!

A bit about Martin:

I am originally from Sweden, where I studied Engineering Physics and did my PhD in Signal Processing. My research was about special regression problems and algorithms for them, similar to the LASSO estimator. Afterwards, I was recruited by a company in London, where I worked for about three years before joining LYTT as a data scientist. At LYTT I am working on using sensor data from distributed fiber optics to predict a wide range of phenomena. Distributed fiber optics have been around for some time, but they have recently started to be used to monitor things ranging from the structural integrity of space vehicles, roads and bridges to glaciers, whales and even beetles.

How did you start out in your tech career?

It was when I was a PhD student in signal processing that I first heard of something called a “data scientist”. I was learning Python at the time, so I found the idea very interesting. Some companies were hiring data scientists back then, but everyone was looking for unicorns who were expected to know everything about everything. It wasn’t until some years later that the market matured and the job role became more clearly defined. I still think the term “data scientist” is poorly defined. Some companies need data analysts but no data engineers. Other companies need data engineers but no data analysts. Unfortunately, it all often gets grouped together under data science.
One disadvantage of transitioning to data science from a technical field like signal processing is that technical fields are often theory-oriented, and mathematical results are often valued more than practical applications. Data science is different: there you really need to listen to what the data tells you.
LYTT was actually the first place I worked as a data scientist. Working with sensor data is a bit different from working with, for example, retail sales data. Different rows are seldom independent, and it is sometimes difficult to understand what the inputs mean. On the other hand, the data rarely change over time, and a stable prediction often remains stable for a very long time. The problems we face are diverse, and we get to build everything from rule-based heuristics to deep neural networks.
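
To make the point about non-independent rows concrete, here is a minimal, hypothetical sketch in Python (the data and column names are made up, not LYTT's): because neighbouring sensor readings are correlated, a random train/test split leaks information between rows, so splitting on time is usually the safer default.

```python
import numpy as np
import pandas as pd

# Hypothetical time-ordered sensor readings; neighbouring rows are correlated.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=1_000, freq="s"),
    "strain": rng.normal(size=1_000).cumsum(),
})

# A random split would scatter correlated neighbours across train and test,
# leaking information. Splitting on time keeps the evaluation honest.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]
```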

What are the signs of success in your field?

I think the best teams in data science are the ones that let the data speak for itself, provide real business value and use the right technology. You let the data speak for itself when you don’t make guesses and instead look at what the data can tell you and whether you have the right data. It is easy to think that you can reason your way to the solution, or to rely on having read somewhere that “X is better than Y”, but if you don’t validate that your assumptions and conclusions have support in the data, you are really just guessing. You provide real business value when you talk to product managers, sales and other teams to make sure you are solving a relevant problem.

What is the best and worst thing about your job role?

The best thing is when you can use data to make a real difference for the team, the company or the client: when you present just enough evidence that the customer or stakeholder connects the dots for themselves and understands how your analysis can add value to their operations.
The worst part is when people think machine learning models are black boxes and refuse to believe them even when they work well. They stay stuck with their less performant solutions and miss out on a lot of the value they could get from a more data-driven approach.

What can you advise someone just starting out to be successful?

The best advice I have heard was from the writer Neil Gaiman, who said, “To be a writer, you need to write. And you need to finish things.” I think the same is true in tech: to learn tech, you just need to do it. The hard part is finding out what level you are on. If you have completed a tutorial, try to make a small change to the project, then another small change, and so on. If you try to do too much, analyse what knowledge you are missing and learn it through other small projects. Keep experimenting and keep learning.
For data science specifically, I think the best way to learn is to work with real-world datasets, explore the data and write a report with very good-looking figures. You might not win any Kaggle competitions that way, but working with real data gives invaluable experience. Also, read other people’s reports to find out what good looks like.

How do you switch off?

Away from the screen with a good book.

What advice would you give your younger self?

I would give the same advice as I mentioned above.

What is next for you?

I am learning Redis and Rust right now. Data modelling in Redis is quite different from SQL and makes you rethink a lot of concepts, such as joins. Rust works without a garbage collector, so it changes how you write programs. Both technologies constrain what you can do, so you need to rethink many problems and find new approaches to solve them.
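
As a rough illustration of the point about joins, here is a sketch using the Python redis client (the keys are made up, and it assumes a Redis server is running locally): where SQL would join an orders table to a users table, a common Redis pattern is to denormalise, keep a set of order ids per user, and then follow the keys yourself.

```python
import redis

# Made-up keys for illustration only; assumes a Redis server on localhost.
r = redis.Redis(decode_responses=True)

# In SQL you might JOIN users to orders. In Redis a common pattern is to
# store each order as a hash and keep a set of order ids per user.
r.hset("order:42", mapping={"user_id": "1", "amount": "99.50"})
r.sadd("user:1:orders", "42")

# The "join" becomes following the keys yourself.
orders = [r.hgetall(f"order:{oid}") for oid in r.smembers("user:1:orders")]
```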

If you could do another job now, what would you do? Why?

I would also want to know the answer to that question since I really have no idea.

What are your top 5 predictions in tech for the next 10 years?

  1. More widespread use of LLMs in applications. The consequences will be easier interfaces and better interactivity with data and ML systems, but also the realisation of what overreliance on LLMs can lead to, the rise of “fake data” generated by bots, and the rise of models that are partially trained on the outputs of older models and gradually drift further and further from reality.
  2. More and better out-of-the-box tools for data lakes, data lakehouses, ETL pipelines and model serving will make a lot of data work easier. You will still need data engineers and data scientists, but the new tools can automate a lot of routine work.
  3. I believe we can get by with less of the MLOps ecosystem than we think. In time, I think more people will realise which parts are actually necessary for good model serving, and under what circumstances.
  4. Education and knowledge sharing in tech have grown steadily for a long time. I believe that LLMs and graph data modelling will make it possible for everyone to have a teacher who knows them personally and helps them learn what they need.
  5. Most of my predictions are wrong. Predictions are hard to make, especially about the future.

Watch Martin’s session at the DSF Summer School here.

Thank you to all our wonderful speakers for taking part in our Speaker Spotlight!

Want to become a DSF Speaker? Apply here!