Artificial Intelligence has penetrated the capillaries of our digital society, and everyone sees new opportunities and possibilities. At the same time, there is a lot of discussion about the threats and dangers that AI entails. The European Union has therefore taken the lead worldwide and is introducing new regulations - such as the AI Act and the Digital Services Act - to ensure that the fundamental rights of people are also guaranteed in the digital world. In concrete terms, this means, for example, that public and private organisations must be open about the algorithms they use and how they work. The ‘magic black box’ of AI must be replaced by a ‘glass box’ that is transparent to everyone. Increasingly, this movement is called ‘human-centred AI’.
Recently, I have frequently been asked what this ‘human-centred AI’ means exactly, whether it is really important or just another hype, and whether it is technically possible for AI to be transparent. My answer: it is hugely important, it is a hype, and at Y.digital we are fully engaged in thinking it through in the design of our AI solutions. Let me elaborate a bit on this.
The architecture of AI
You can look at AI from different perspectives. The definition of the European Commission takes an operational point of view: “all systems that demonstrate intelligent behaviour by analysing their environment and - with a certain degree of independence - taking action to achieve specific goals”. The AI Act focuses more on risk and use: for example, AI applications used for ‘social scoring’ fall into the category of ‘unacceptable risk’. And from a technical perspective, AI is a colourful collection of concepts, methods, techniques, modules and technologies, each with its own typical characteristics and functioning. To realise a specific AI application, you select multiple elements, assemble the application, connect the data sources and start training. In fact, AI is not magic: AI applications are assembled and configured by following solution architecture patterns, although this may not be recognisable to outsiders (the sketch below illustrates the idea). The art of architecting - as always - is to design an application that fulfils all desired functionalities but also complies with national and EU regulations. But when exactly is an AI application ‘human-centred’? Publications on this subject often mention different topics, but in our view, it revolves around four characteristics.
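Before diving into those four characteristics, here is a minimal sketch of the ‘assemble and configure’ idea: an AI application built by composing selected components behind a common interface. The component names and the pipeline abstraction are hypothetical illustrations, not a description of any particular platform:

```python
# A minimal sketch of 'assembling' an AI application from components.
# The component names below are hypothetical examples.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    """An AI application as an ordered chain of processing components."""
    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def add(self, step):
        self.steps.append(step)
        return self

    def run(self, payload: dict) -> dict:
        for step in self.steps:
            payload = step(payload)
        return payload

# Each 'element' is a component with a typical AI responsibility
def detect_language(p): return {**p, "lang": "en"}             # e.g. NLP module
def extract_entities(p): return {**p, "entities": ["AI Act"]}  # e.g. NER model
def lookup_knowledge(p): return {**p, "sources": ["art. 5"]}   # e.g. knowledge graph

app = Pipeline().add(detect_language).add(extract_entities).add(lookup_knowledge)
print(app.run({"text": "What does the AI Act forbid?"}))
```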
1. Supporting people
With traditional IT systems, only part of the work can be automated, and what remains is processed manually by humans. AI has the potential to go much further, but this digitisation should not be a goal in itself. Human-centred AI puts a specific focus on supporting customers and employees in such a way that they have more time and opportunity for human contact, self-development and the non-trivial things that matter. For example: showing genuine human interest in a personal conversation. Ensuring that nurses’ hands are free to care for patients. Increasing self-reliance without losing the human touch. Having time to deal with complex cases instead of boring repetitive work. Being able to do justice to someone’s specific circumstances when making a government decision. At Y.digital, we call this ‘empowering humans’. For us, human-centred AI means using AI applications to support people, while people themselves remain in control at all times.
2. The human as a blueprint
A specific strand of AI research tries to build algorithms that - based on studies of the human brain - mimic how people learn. These algorithms are not given any rules but learn from rewards and punishments. An example is the company DeepMind, whose AlphaGo program beat the world champion at Go with this approach. However, human-centred AI does not necessarily mean going this far. We use humans as a source of inspiration. Our goal is to study the things that people are good at, to find out which elements in the human body play a role, and then to give these elements a place in the architecture of our AI applications - without pretending to be able to replace humans completely. At Y.digital, we call this “the human as a blueprint”. The architecture of our platform Ally contains modules and functions that have a recognisable human counterpart, such as short- and long-term memory, a knack for languages, communication skills, various ways of learning, knowledge libraries and a knowledge processor. And we focus on AI concepts and technology that fit in well with this, such as natural language processing, machine learning and knowledge graphs.
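To make ‘learning from rewards and punishments’ concrete, here is a minimal sketch of tabular Q-learning, one of the classic reinforcement-learning algorithms (systems like AlphaGo use far more sophisticated variants). The corridor environment and all parameter values are illustrative assumptions, not part of our Ally platform:

```python
# A minimal sketch of tabular Q-learning: the agent is given no rules,
# only rewards and punishments, and learns which actions pay off.
# The environment and all parameters here are illustrative.
import random

N_STATES = 5          # states 0..4; reaching state 4 yields the reward
ACTIONS = [-1, +1]    # move left or right along a simple corridor
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise pick the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward for reaching the goal, a small punishment for wandering
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Core update: nudge the estimate towards reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy: in every state, moving right (+1) towards the goal
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```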
3. Advancing humanity and society
One step beyond “human AI” is “humane AI”, aimed at helping mankind meet the challenges we all face: improving the quality of life, nature and the environment, climate, healthcare, security and the food supply. AI can make a huge contribution to these issues if we manage to find new ways to deal with the main barriers, such as funding, commercial revenue models and access to data. But contributing to society can also be done through small initiatives: at Y.digital, for example, we apply our AI expertise to the social initiative Teach the Future, which aims to make children think about the future and actively encourages them to shape their dreams.
4. Contributing to fundamental rights, the rule of law and democracy
The most frequently mentioned risk of AI is that it cannot be clearly explained how its algorithms work, especially when they are self-learning. AI applications must therefore comply with increasingly strict national and EU regulations. But you can also turn this around: these regulations are intended to guarantee the fundamental rights of people in the digital world, which is what we all should want, and therefore human-centred AI must fulfil these fundamental rights ‘by design’. The EU demands that the functioning of an AI application be transparent, traceable and explainable. The good news is: much more is possible than most people think! At Y.digital, we have been researching possible architecture designs to meet these fundamental requirements. Our employees have fundamental knowledge of AI concepts - many of them hold a PhD in a relevant AI field - and they work closely together with legal experts. We are continuously adding new functions to our AI platform, Ally, which was designed from the start to comply with these regulations and ethical requirements. For example, we developed modules that contribute to traceability in the legal domain, such as knowledge graphs (with legal concepts and references to laws and regulations, based on legal analyses) and audit logs of all processing.
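To illustrate, here is a minimal sketch of what such a traceability design can look like: a legal concept is linked through knowledge-graph triples to the laws it is based on, and every lookup is recorded in an audit log. The concepts, article references and function names are hypothetical and simplified; Ally’s actual modules are more elaborate:

```python
# A minimal sketch of the traceability pattern: knowledge-graph triples link
# legal concepts to their sources, and every processing step is audit-logged.
# All concepts, article numbers and names below are hypothetical examples.
import json
from datetime import datetime, timezone

# Tiny in-memory knowledge graph: (subject, predicate, object) triples
legal_kg = [
    ("concept:ParentalLeave", "definedIn", "law:CivilCode/Art7:675"),
    ("concept:ParentalLeave", "amendedBy", "law:LeaveAct/Art6:1"),
]

audit_log = []  # in practice: an append-only, tamper-evident store

def trace_concept(concept: str) -> list[str]:
    """Return the legal sources backing a concept, and log the lookup."""
    sources = [obj for subj, _, obj in legal_kg if subj == concept]
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operation": "trace_concept",
        "input": concept,
        "output": sources,
    })
    return sources

print(trace_concept("concept:ParentalLeave"))
print(json.dumps(audit_log, indent=2))
```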
Ten years ago, most people thought that ‘privacy’ was an outdated concept on the Internet. But look how this has changed in recent years, with our EU leading the way! Given the amount of interest in ‘human-centred AI’, it is safe to say that the hype is there. It is already commercially interesting to be at the forefront of this hype and develop AI applications that are compliant with the new regulations. So, a hype? Definitely. Will it pass? Yes, but that will take some time. This hype is about something very important: embedding the essential fundamental rights of citizens in the digital world, the most important norms and values that we share in Europe. This really must be designed correctly in AI applications. Is that feasible? A lot is already possible using existing AI concepts and technologies, and this will certainly develop further in the coming years. Feel free to contact us for more information: we are fully devoted to contributing to the hype of human-centred AI.