The second of three episodes recorded at the AI and Big Data Expo, produced in partnership with Exfluency, the AI-driven translation and localisation system. The event took place at the RAI in Amsterdam on 26th-27th September. We recorded a series of interviews from the Exfluency booth with a number of the speakers and attendees at the event. Our guests for this episode were:
Shahin shared his insights into how IKEA works internally as a large franchise company. He works for the INGKA Group, the retail arm of IKEA, which runs IKEA in 32 countries, with more than 370 stores across the world. Shahin described his specific role within INGKA Digital, where he looks at every interaction that happens between IKEA and its customers, so that customers can explore IKEA's large range online.
Shahin then described the IKEA vision: to create a better life for the many. He said that if you work for IKEA you see this every day, and that employees take inspiration from it. For his part, Shahin creates a better life for people through scalable data capabilities and foundations, so IKEA can offer its products to the many and run operations and data analytics with as little technical debt as possible.
Shahin explained how they recreate the memorable experience of shopping in-store, online. The company recently celebrated its 80th birthday, which means 80 years of colleagues and co-workers working to perfect that great, memorable IKEA experience. They rely on traditional strengths built up over time and look at how to make those strengths available to IKEA customers across different online channels. He spoke of a large responsibility: IKEA's online channels receive 4 billion visits every year, and roughly 700 million customers visit the stores.
Shahin then explained how they divide their online recommendations. One type is product recommendations of the kind found on any retail website, where the site suggests substitutes or items that go well together. He then described a four-stage recommender model, a state-of-the-art pattern that many companies use to move from recommendation algorithms to recommendation systems. The final recommendation system IKEA uses offers a vision of how IKEA products would look in your house, or how you can put them together, with the aim of inspiring and boosting the customer shopping experience. Shahin said his current team used to be called the inspirational feed, because its only goal was to make sure it inspired customers. He also talked about the ethical AI team at IKEA, which ensures they are using the right tooling and going about things the right way to stay compliant.
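Shahin doesn't spell out the four stages, but the four-stage recommender pattern commonly described in industry runs retrieval, filtering, scoring and ordering in sequence. The sketch below illustrates that pipeline with an invented toy catalogue; the product names, fields and scoring logic are assumptions for demonstration only, not a description of IKEA's actual system.

```python
# Illustrative sketch of the common four-stage recommender pattern:
# retrieval -> filtering -> scoring -> ordering.
# Catalogue data and scoring logic are made up for demonstration.

CATALOGUE = {
    "BILLY bookcase": {"category": "storage", "in_stock": True, "popularity": 0.9},
    "POANG chair": {"category": "seating", "in_stock": True, "popularity": 0.8},
    "LACK table": {"category": "tables", "in_stock": False, "popularity": 0.7},
    "KALLAX shelf": {"category": "storage", "in_stock": True, "popularity": 0.85},
}

def retrieve(viewed_category):
    """Stage 1: pull a broad candidate set, e.g. items sharing a category."""
    return [name for name, item in CATALOGUE.items()
            if item["category"] == viewed_category]

def filter_candidates(candidates):
    """Stage 2: drop items that cannot be recommended (e.g. out of stock)."""
    return [name for name in candidates if CATALOGUE[name]["in_stock"]]

def score(candidates):
    """Stage 3: assign each remaining candidate a relevance score."""
    return {name: CATALOGUE[name]["popularity"] for name in candidates}

def order(scored, k=3):
    """Stage 4: rank by score and return the top-k for display."""
    return sorted(scored, key=scored.get, reverse=True)[:k]

recommendations = order(score(filter_candidates(retrieve("storage"))))
print(recommendations)  # ['BILLY bookcase', 'KALLAX shelf']
```

The point of splitting the pipeline this way is that each stage can be swapped independently: retrieval can become an embedding lookup and scoring a learned model without changing the surrounding stages.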
Maarten explained the "Spice Girls gap", a reference to the relationship he has with his dad, who was a big Beatles fan. Maarten is from the 90s, so he was a Spice Girls and Backstreet Boys fan, and one of the things his dad asked him was: is this music you are ever going to listen to when you're older? The Beatles are great, the Rolling Stones are great, but how about the Spice Girls? Then, about two years ago, he had to present in Cologne and there was a 90s revival party. That gave him the insight that whether something makes sense to you is a matter of perspective. At a company like Schiphol or the Dutch railways, the employees are older, while the people working with data are more in their twenties and thirties: exactly the age difference between the Beatles and the Spice Girls. This is the age gap Maarten is trying to conquer.
Maarten shared some of the key points of his presentation: when making a data product it must look nice, and there must be planning to ensure processes actually improve, because it is all about making better decisions. He noted that data literacy is not equally high everywhere, and he is keen to raise it.
Maarten pointed to the AI image recognition case of Deep Turnaround for planes, which Schiphol is looking to sell to other airports, as an exciting use of AI within the airport. Schiphol also implements time slots for travellers to reduce queues and cut the extreme peaks the airport cannot handle. Maarten mentioned that AI can create security issues from the outside, as well as the risk of AI taking over that leading figures at Google talk about; in response, Schiphol makes sure to put a human in the lead. Despite working towards an autonomous airport by 2050, Schiphol still always has a human making the final decision: the AI gives the advice, the human makes the decision.
He then explained the term autonomous airport: self-driving buses and an autonomous baggage system where people are no longer involved, with the people who held those roles given better jobs where they can really empower themselves and use their minds, instead of being exposed to the pollution of an airport.
Hana explained natural language processing as the technology that powers chatbots, voice bots and similar applications; without going into technical detail, that is its main application. She then described a real-life case study of NLP at work. The application began at the time of the Ukrainian war, when a lot of people were arriving in the Czech Republic and there were not enough volunteers to distribute the refugees to available accommodation. The team from Deloitte used voice bot technology to connect with the refugees and collect information from them (the demand), and to collect information from the public about the offer, or supply, of accommodation, processing both in the meantime. What was important was what was added on top: automation, including a loop to constantly check availability and get the information to the Red Cross and the refugees. She then explained how live translation, as part of the technology, made it possible to connect people who would not otherwise have been able to communicate.
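The automation Hana describes, repeatedly checking collected demand against collected supply, can be sketched as a simple matching pass. Everything here is invented for illustration (the field names, capacities and matching rule are assumptions); the real Deloitte system would involve far more nuance.

```python
# Hypothetical sketch of the availability-checking loop described:
# collected demand (refugee households) is matched against collected
# supply (offered accommodation). All names and fields are invented.

demand = [
    {"family": "A", "people": 4},
    {"family": "B", "people": 2},
    {"family": "C", "people": 5},
]
supply = [
    {"host": "X", "capacity": 2},
    {"host": "Y", "capacity": 4},
]

def match_round(demand, supply):
    """One pass of the check: pair each open request with the first
    offer that has enough capacity, removing both once matched."""
    matches = []
    for request in list(demand):  # iterate over a copy while mutating
        for offer in supply:
            if offer["capacity"] >= request["people"]:
                matches.append((request["family"], offer["host"]))
                demand.remove(request)
                supply.remove(offer)
                break
    return matches

matches = match_round(demand, supply)
print(matches)  # [('A', 'Y'), ('B', 'X')]
```

Unmatched requests (family C here) stay in the demand list, which is why the real system ran this as a constant loop: new offers arriving later can satisfy requests that could not be matched earlier.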
Hana said the wonderful thing about the technology recently is that it is affordable and scales fast, making it a perfect technology for rapid-response solutions: reactions to crises like those in Turkey and Morocco, but also local disasters like flooding. Covid also showed the need to address mass audiences. So this technology has really wonderful uses, especially for NGOs and the public sector, and of course in business as well.
Hana talked about how an SME could use the technology. Small and medium-sized businesses usually ask: how can we start our automation and AI journey, and what is the first use case we could possibly think of? Precisely because of the affordability and scalability of NLP solutions such as voice bots and chatbots, she said it is a really nice piloting use case to start with, even on a low budget.
Marc discussed the ethical implications of ChatGPT in the tech industry. He said there are several implications, seen from different ethical perspectives. One highlights the pluses and minuses of such an application; one highlights duties and rights; one highlights how such an application can influence how we communicate with each other; and one concerns the virtues we can develop, or that are corroded, through an application such as ChatGPT.
Marc then explained the motivation behind writing his recent book on ethics for people who work in tech. He works with a lot of clients and partners who are motivated to include ethics in their projects but find it difficult. He created the book to step in at that stage, giving them vocabulary, a structure, and even a workshop format and a canvas to integrate ethics into their projects in a very practical manner.
Marc said the core of the book is these four ethical perspectives, which he illustrated with examples. For pluses and minuses, take self-driving cars: the pluses for the driver of such a car are different from the effects, sometimes minuses, for pedestrians on the street. The perspective of duty ethics, which deals with duties and rights, he illustrated with surveillance cameras in the city: on the one hand, the city has an obligation to protect its citizens and promote safety; on the other, citizens have a right to privacy. The third perspective is relational ethics, with the example of IoT devices such as smart speakers in the home and how they influence the way we relate to and communicate with each other. The fourth perspective is virtue ethics, which he highlighted with a social media app: typically such an app is designed in a way that corrodes your ability, your virtue, of self-control, but you can think of different designs that instead help you cultivate self-control and other virtues like justice or honesty.
Marc considered the key ethical considerations for people who work in the tech industry. The themes that recur most often are privacy and the use of data; bias, equality, fairness or non-discrimination; and transparency, explainability or explicability. These are often at play with algorithms and AI systems. At TNO, the easiest way in is a rather small one- or two-hour workshop with 5 or 6 people that guides you through the four ethical perspectives, so you can start small and quickly, an approach he calls rapid ethical deliberation.
Jaromir talked about the technology that underpins the platform. Exfluency is a tech company providing hybrid intelligence solutions for multilingual communication; by harnessing AI and blockchain technology, it gives tech-savvy companies access to modern language tools. Exfluency's goal is to make linguistic assets as precious as any other corporate asset, and getting there took several iterations. The underlying technology is not like the ChatGPTs of this world that seemed to appear suddenly last November; it took several iterations to reach where it is now. For example, ten years ago Jaromir was a co-founder and held a couple of patents in the field of machine learning and semantic processing, part of the evolution of what came before. Even with things we are more used to, like Facebook or smartphones, there were plenty of Facebooks before Facebook, and plenty of phones before the iPhone in 2007. It took a lot of iterations before the world became what it is right now.
Jaromir said that compared with the start of those iterations there is now far more processing power, along with GPUs, algorithms and transformers that give really great results. Their current technology gives you specific answers to your questions rather than generic ones; it is a source of truth rather than of the random. That is possible because of the thousands, even millions, of engineering hours, patents and IP produced behind the scenes.
Jaromir then talked about his personal background. For the last 20 years he has worked on hardware, software, machine learning and what he calls semantic networks, semantic search and semantic text processing. He is a happy co-founder of a couple of companies around semantic processing, and co-author of several patents. One of those companies he sold in Silicon Valley back in 2012, in an exit to a large social network that then had 300 million users, which allowed him to build at scale. That led to even more interesting solutions in the field, once processing power could deliver as much as he could demand of the algorithms. He expressed his happiness at working with smart guys, much smarter than him, to develop such algorithms and solutions. For Jaromir, the exciting future of AI includes a piece of hardware in your ear that processes events and triggers actions based on personalised data, personalised text and event processing.
Timea discussed a panel she was speaking on later that afternoon about the bottom line: all industries face the same challenges of real-time analytics and real-time business intelligence. What foundations, processes and systems do companies have to have in place to move along? It is not just data warehousing, data cleaning and data collection; it is also the human part. Timea explained that the human part is data literacy, which she said is a huge challenge in her industry, as they are always short of people.
She said she has her own, more unorthodox way of approaching data analytics at Boehringer, compared with the traditional one. The traditional, by-the-book way is to have everything in place: good data collection, data quality, a feedback system, starting with cloud computing and moving on to advanced analytics. Her own flavour, the not-by-the-book part, is the human side and the community: creating a well-put-together community, supporting her colleagues and elevating data literacy.
Timea said that since September she has been an enabler of Snowflake and Tableau at Boehringer Ingelheim. She created a community, and every two or three weeks they hold a session on raising the level of data literacy in data warehousing, Tableau and reporting. She also expressed her happiness at being selected as a Tableau Ambassador. Turning to the challenges she faces, one of the main ones on the human side is data literacy: as she mentioned, there is a wide range of knowledge across her eight countries. One of her countries can actually code in Python and SQL; another does not know what SQL syntax is. She is trying to bring everybody to the same level. The other challenge is that hers is a traditional, old-school company; pharma companies are very late to digital transformation, so this year, 2023, Boehringer Ingelheim is finally welcoming the 21st century with cloud computing. Timea also talked about next year's deadline for one of the biggest data analytics projects at Boehringer so far: moving the data warehouse.
Damian explained edge AI, which basically means running compute where the data is generated. Normally, when you use ChatGPT, you type in a prompt, it is sent off to a remote server, and the compute happens on that server. But if, for example, you take a picture with your iPhone and algorithms make the image nicer before you take the shot, that happens on the device, closer to you as a user; that is edge AI. Edge AI is really helpful when you want to decrease latency, decrease costs and increase accessibility. For example, if you want to run a computer vision solution on a robot, it doesn't make sense for the data to travel to some server room and back. Everything should happen on the device, on the robot, both to reduce latency and to make sure the robot can still see when there is no Wi-Fi access.
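The trade-off Damian describes can be made concrete with a toy latency model: cloud inference pays a network round trip on every request, while edge inference pays only (often slower) on-device compute. The millisecond figures below are illustrative assumptions, not measurements.

```python
# Toy latency model contrasting cloud and edge inference.
# All millisecond values are illustrative assumptions.

def total_latency_ms(compute_ms, network_rtt_ms=0.0):
    """End-to-end latency is compute time plus any network round trip."""
    return compute_ms + network_rtt_ms

# Cloud: fast server-side compute, but every request pays the round trip.
cloud = total_latency_ms(compute_ms=10.0, network_rtt_ms=80.0)

# Edge: slower on-device compute, but no network hop, and it still
# works with no connectivity at all.
edge = total_latency_ms(compute_ms=40.0)

print(cloud, edge)  # 90.0 40.0
```

The same model also shows why edge wins on accessibility: when the network is unavailable, the cloud path's latency is effectively infinite while the edge path is unchanged.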
Damian summarised some of the key points from his presentation earlier that day. His main point was that a lot of businesses struggle to deploy AI solutions because they have to go through the whole process of getting hardware ready. By default, many companies try to deploy on GPUs, and it is difficult; companies should not have to care about building a GPU cluster or about specialised hardware accelerators, only about their core business. That is what Neural Magic enables: it abstracts away all the hardware considerations and lets models run seamlessly on commodity hardware, namely CPUs, which are readily available and always at our fingertips.
Damian then explained the sort of problems Neural Magic is trying to solve: letting clients focus on their core business rather than setting up hardware accelerators. Neural Magic abstracts away the hardware problems and enables clients to run state-of-the-art deep learning models on commodity hardware anywhere they want: on CPUs at any location, be it an edge device, the cloud or their private data centre.
Damian talked about live examples, including a big American retailer and one of the leading auto manufacturers. Recently Neural Magic has been working with a start-up from the UK called Uhura, helping them enable their customers to run deep learning solutions on commodity hardware. Neural Magic is excited about the future because it is enabling customers to deploy on compact edge devices, including large language models, which are obviously the hottest thing right now. LLMs are very useful but problematic to deploy on edge devices because they are so big; however, Neural Magic already knows it can compress those models by 2x, 4x, maybe even 8x, and deploy them on really tiny, compact devices.
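One intuition for where a 2x-4x compression factor comes from is quantisation: storing weights as 8-bit integers instead of 32-bit floats cuts storage fourfold before any pruning. The sketch below is a back-of-the-envelope illustration with synthetic values, not Neural Magic's actual pipeline, which involves techniques such as pruning and quantisation-aware training.

```python
# Back-of-the-envelope sketch: quantising float32 weights to int8
# shrinks storage 4x. Weight values here are synthetic.
from array import array

weights_fp32 = array("f", [0.1 * i for i in range(1000)])  # 32-bit floats

# Map the weight range onto the signed 8-bit range [-127, 127].
scale = max(abs(w) for w in weights_fp32) / 127
weights_int8 = array("b", [round(w / scale) for w in weights_fp32])

ratio = (weights_fp32.itemsize * len(weights_fp32)) / (
    weights_int8.itemsize * len(weights_int8)
)
print(f"compression: {ratio:.0f}x")  # compression: 4x
```

At inference time the int8 weights are rescaled by `scale`, trading a small amount of precision for the size (and memory-bandwidth) reduction that makes edge deployment feasible.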