BAINSA at the World AI Cannes Festival

BAINSA was in Cannes for the WAICF’s second edition in early February 2023! Our team had the opportunity to attend talks by speakers such as Yann LeCun and Stuart Russell, among many others, and to learn about the latest innovations in the AI industry. 

The following are summaries of some of the conferences that the BAINSA team attended at WAICF 2023. Learn more about BAINSA at WAICF here.

 

Towards AI systems that can learn, reason, and plan – 10/02/2023

At WAICF 2023, Yann LeCun, Vice President and Chief AI Scientist at Meta, discussed the potential for AI systems to learn, reason, and plan. He described how AI systems can recognize images, localize and estimate the pose of each person in a scene, and perform panoptic segmentation. He also mentioned a collaboration between NYU and FAIR to produce high-quality MRI images, as well as work done by some of his FAIR colleagues in collaboration with NeuroSpin in France to attempt to correlate brain activity with neural nets that have been trained to understand language or recognize images. 

AI hastens the advancement of science, especially in biochemistry and physics: it has enabled huge advances in protein folding and high-energy physics. The Open Catalyst Project is another intriguing effort to use machine learning to predict the properties of specific compounds or materials, such as their suitability for energy storage, in order to develop new catalysts that facilitate the separation of water into oxygen and hydrogen.

With systems like Stable Diffusion and large language models, the potential for employing AI is enormous. These systems use an autoregressive architecture, which accepts a sequence of tokens as input and trains the model to predict the next token. In the field of text generation, Google’s Transformer architecture, introduced a few years earlier, has been completely transformative. BlenderBot and Galactica from FAIR, LaMDA and Bard from Google, Chinchilla from DeepMind, and ChatGPT from OpenAI are revolutionary for content moderation, language translation, and hate speech detection. Autoregressive models can also facilitate typing and writing, as in predictive keyboards. However, they are prone to errors and have significant limitations. 
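The autoregressive setup described here can be illustrated with a minimal sketch: a model assigns probabilities to the next token given the tokens so far, and generation proceeds by repeatedly appending the most likely continuation. The tiny bigram “model” below is an illustrative stand-in, not any of the Transformer systems mentioned.

```python
from collections import Counter, defaultdict

# Toy "training corpus" and a bigram model: P(next | current) estimated by counts.
corpus = "the cat sat on the mat the cat ate".split()

bigram = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigram[cur][nxt] += 1

def generate(prompt, steps):
    """Greedy autoregressive generation: each step predicts the next token
    from the sequence produced so far, then appends it and repeats."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = bigram.get(tokens[-1])
        if not candidates:
            break  # no continuation known for this token
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the", 3))  # → "the cat sat on"
```

Real systems replace the bigram counts with a deep network over the whole context, but the generate-one-token-and-repeat loop is the same, which is also why errors can compound over long outputs.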

Recent work at FAIR introduced an autoregressive model called “Make-A-Scene”, which lets users type a text description of an image and provide a labelled sketch of what they want; the system produces the resulting image after a few seconds. This is one example of an AI model generating images, but deep learning powers numerous other applications, including translation, speech recognition, speech production, question answering, retrieval, ranking, and the moderation of online content.

Denoising autoencoders are a technique used for multilingual text comprehension, mathematical derivation, symbolic manipulation of mathematics, and dialogue systems, amongst many others. The “No Language Left Behind” program from Meta is capable of translating 200 languages in 4,000 directions, and it can do so after being trained on only a small number of parallel texts. 

The next challenge, according to LeCun, is intelligent virtual assistants that will help us in our daily lives. We do not yet have systems intelligent enough to play this role. We need autonomous AI: systems that approach human-level intelligence, or at least something comparable to the intelligence of a cat or a dog. 

Currently, most of this work relies on reinforcement learning, in which a system experiments and self-corrects by “learning” from its own errors. LeCun pointed out that this works well in practice in video games, but not in the real world. The challenge for AI over the next few years is to learn predictive models of the world, which will enable a machine to predict the outcomes of its actions and then learn to reason. This is comparable to self-supervised learning, in which a model is trained to predict what it cannot currently perceive.

LeCun proposed a global cognitive architecture that could help machines gain a deeper comprehension of the world. This architecture is founded on a mental model of the environment that the system is intended to interact with or control. The behaviour of the system is determined by a cost module, whose sole purpose is to be minimized. To accomplish this, the system imagines a series of actions and, using its internal world model, predicts their effect on the world. This is known as model-predictive control, and it employs a very old concept from optimal control theory. 
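The planning loop of model-predictive control can be sketched in a few lines: the controller rolls candidate action sequences through its internal world model, scores each imagined outcome with the cost module, and picks the best sequence. The 1-D world model, quadratic cost, and exhaustive search below are illustrative assumptions, not LeCun’s proposed architecture.

```python
import itertools

def world_model(state, action):
    # Internal world model: predicts the next state given state and action.
    return state + action

def cost(state, target=10.0):
    # Cost module: squared distance from a desired target state.
    return (state - target) ** 2

def plan(state, actions=(-1.0, 0.0, 1.0), horizon=3):
    """Imagine every action sequence up to `horizon`, roll each through the
    world model, and return the sequence whose final state has lowest cost."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = world_model(s, a)  # simulate, don't act
        if cost(s) < best_cost:
            best_seq, best_cost = seq, cost(s)
    return best_seq

print(plan(0.0))  # → (1.0, 1.0, 1.0): from 0, always step toward the target at 10
```

In practice the world model is learned rather than hand-written, and the search over action sequences is done by gradient-based or sampling-based optimization instead of brute force.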

The speech concluded with LeCun emphasizing the potential of AI to transform various domains such as healthcare, education, and environmental sustainability, all the while factoring in the ethical and social challenges of AI (privacy, bias, job displacement). LeCun has no doubt, however, that we can overcome these challenges and create a future where AI is truly integrated into our lives.

Author: Csilla Lelle Janky

The Science and Practice of Responsible AI – 10/02/2023

The issue of responsible AI has become a hot topic in recent years, with many experts expressing concerns about the potential negative impacts of AI on society. It was a topic of discussion at WAICF 2023, and in particular Michael Kearns, a leading expert in computational learning theory and algorithmic game theory, gave a speech about the importance of responsible AI, highlighting the need for ethical and socially responsible approaches to developing AI systems.

Kearns discussed the science and practice of responsible AI, both in the scientific community and in the private sector. He explained that machine learning models have traditionally been trained for accuracy alone, but that these days there are other properties we may want from a model besides strict predictive accuracy. To obtain these properties, we may need to take explicit action in the training workflow to enforce them: auditing trained models for demographic bias, collecting more and better data on particular demographic groups, and gathering new features. It is important to train machine learning models with particular goals explicitly in mind, such as avoiding behaviours like privacy violations. 

Kearns argued that in order to do this, we need group fairness definitions, which specify which demographic groups or attributes we want to protect. We also need quantitative, mathematically specifiable definitions of fairness for the particular application, such as requiring that the false rejection rate be equal, or at least approximately equal, across those groups. 
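A group fairness check of this kind can be made concrete: compute the false rejection rate separately for each demographic group and flag the model if the gap exceeds a tolerance. The toy labels and groups below are invented for illustration.

```python
def false_rejection_rate(y_true, y_pred):
    """Fraction of truly positive (qualified) cases that the model rejects."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    rejected = sum(1 for p in positives if p == 0)
    return rejected / len(positives)

# Toy (true labels, predictions) split by a hypothetical demographic attribute.
groups = {
    "A": ([1, 1, 1, 0], [1, 1, 0, 0]),  # one of three qualified cases rejected
    "B": ([1, 1, 0, 0], [1, 0, 0, 0]),  # one of two qualified cases rejected
}

rates = {g: false_rejection_rate(yt, yp) for g, (yt, yp) in groups.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 3))  # audit: flag the model if the gap is too large
```

The same pattern works for any group fairness metric (false positive rate, selection rate, and so on); what changes is only the quantity compared across groups.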

We also need to change the way we train models to take fairness into consideration, and in particular to examine the trade-off between accuracy and fairness. In a perfect world we would have zero error and zero unfairness; instead, the achievable models lie along an efficient frontier of trade-offs. Kearns pointed out that visualizing this frontier is important, as it provides a good interface between technical people in machine learning and stakeholders, policy makers, and regulators for making sensible choices about what a palatable trade-off might be.
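The efficient frontier mentioned above can be computed from a pool of candidate models scored on both axes: keep only the models not dominated by another model that is at least as good on error and on unfairness. The candidate models and their numbers below are hypothetical.

```python
def pareto_frontier(models):
    """Keep models not dominated by another model that is at least as good
    on both error and unfairness, and strictly better on at least one."""
    frontier = []
    for name, err, unf in models:
        dominated = any(
            e <= err and u <= unf and (e < err or u < unf)
            for _, e, u in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Hypothetical candidates: (name, error rate, unfairness measure).
candidates = [
    ("m1", 0.10, 0.30),
    ("m2", 0.12, 0.20),  # trades a little accuracy for better fairness
    ("m3", 0.15, 0.25),  # dominated by m2: worse on both axes
    ("m4", 0.20, 0.05),
]
print(pareto_frontier(candidates))  # → ['m1', 'm2', 'm4']
```

Plotting the surviving (error, unfairness) pairs gives exactly the kind of picture Kearns suggests showing to stakeholders and regulators: a menu of achievable trade-offs rather than a single model.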

The science of privacy has gone through a plethora of changes in the last 20 years. Before that, anonymization techniques were the predominant way of giving some kind of privacy assurance over a data set. Differential privacy is a newer notion of privacy with rigorous guarantees: it uses noise to balance two competing goals, giving privacy guarantees while keeping computations useful. There is, however, a trade-off between privacy and accuracy: the stronger the privacy guarantee, the less useful the computation. 
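The noise-based trade-off can be seen in the classic Laplace mechanism: to release a count with ε-differential privacy, add Laplace noise with scale sensitivity/ε, so a smaller ε (stronger privacy) means noisier, less accurate answers. The dataset count and ε values below are illustrative, assuming a simple counting query.

```python
import random

def private_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Laplace mechanism: one person joining or leaving changes a count by at
    most `sensitivity`, so noise of scale sensitivity/epsilon hides them."""
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two exponential variates.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_count + noise

rng = random.Random(0)
for eps in (1.0, 0.1):
    answers = [private_count(100, eps, rng=rng) for _ in range(1000)]
    avg_error = sum(abs(a - 100) for a in answers) / len(answers)
    # Stronger privacy (smaller epsilon) => larger average error.
    print(f"epsilon={eps}: mean |error| ~ {avg_error:.1f}")
```

Running this shows the trade-off directly: the ε = 0.1 answers are roughly ten times noisier than the ε = 1.0 ones, which is exactly the privacy-for-accuracy exchange described above.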

It is also very important to keep an eye on downstream uses of AI, because when a machine learning model or service is offered through a commercial API, it is often integrated into some other application. This can lead to downstream problems with both performance and fairness. To address this, research is being conducted on ways of training models upstream so that they are robust to a wide range of variation.

As we are starting to deploy AI in various disciplines, robustness, explainability and responsibility are becoming the centre of discussion. It is a relatively new area, but by bringing together experts from different fields and fostering ongoing research and development, we can ensure that AI is developed in a way that is both technically sound and socially responsible.

Author: Csilla Lelle Janky

AI Adoption – 10/02/2023

The World AI Cannes Festival hosted the conference “AI Adoption” on Friday, February 10, where three leading experts in the field discussed their thoughts and experiences on the topic. The conference was moderated by Giovanni Landi, Vice-President and Expert of Istituto EuropIA.it, who added an interesting philosophical perspective to the debates.

The first speaker, Mara Pometti, Associate Design Director of McKinsey & Company, discussed the importance of putting humans at the center of AI. In her presentation, she emphasized the need for organizations to cultivate an “army of data-savvy humanists.” Pometti believes that data and analytics leaders should adopt a human-centered mindset, rather than a technology-centric one, in order to create AI solutions that truly serve the needs of people. She also stressed the importance of transforming responsible AI rhetoric into solid design principles.

Next, Antonio Iavarone, Head of Digital Innovation Office at Iren, presented a concrete case study of Iren Mercato, which was divided into five steps: customer centricity, business cases, trial phase, adoption, and changing the rules. Iavarone highlighted the importance of involving customers in the AI adoption process and considering their needs and preferences. He also emphasized the need to develop business cases for AI adoption and to thoroughly test the technology before full-scale implementation.

Finally, Massimo Chiriatti, Chief Technical & Innovation Officer of Lenovo, presented a scheme with two systems of perception of the world. The first system is data-driven, while the second represents the human being and relates to the meaning of the world. In his framing, the second system builds on the first through Artificial Intelligence, which provides human beings with a balance of action, evidence, and content to ensure that it serves the needs of people and organizations.

At the end of the conference, an insightful debate ensued between Pometti and Chiriatti on the question of whether AI should be human-centered or technology-centered. Both speakers put forward compelling arguments, with Pometti emphasizing the importance of putting people first and Chiriatti emphasizing the importance of using AI to drive evidence-based decision-making.

Giovanni Landi, the conference moderator, added a unique perspective to the debate by pointing out that AI is not just a technology or a STEM subject, but that it can also be thought of as a humanistic subject rooted in philosophy. He emphasized the importance of considering the ethical and philosophical implications of AI in order to ensure that it is used in a responsible and effective way.

In conclusion, the AI Adoption conference at the World AI Cannes Festival was a valuable opportunity for experts to share their thoughts and experiences on the topic. The debates and discussions highlighted the importance of considering both the technological and humanistic aspects of AI in order to ensure its responsible and effective adoption.

Author: Fabio Pernisi

The Italian AI ecosystem – 9/02/2023

Italian companies are increasingly recognizing the value of data and artificial intelligence (AI) for driving business growth and competitiveness. Three Italian professionals, Mauro Tuvo, Simone Grimaldi, and Cristina Zanini, each representing different industries and organizations, shared their experiences and insights on the use of AI in data management, decision-making support for small and medium-sized enterprises (SMEs), and the challenges of incentivizing digitization for industrial businesses.

Mauro Tuvo, an expert in Data and AI Governance and a member of the DAMA Scientific Committee, discussed the role of metadata in modern data management, particularly in augmented data management using AI. He emphasized how metadata can support data integration, quality, lineage, and governance, and how a metadata model can describe which companies use which data and how, enabling semantic knowledge graphs and systems that respond to complex queries in a straightforward and accessible manner. He also stressed the need for a metadata model for AI governance, both for regulatory compliance and for supporting data scientists and AI system users in finding and using data and in ensuring the accuracy and transparency of models.

Simone Grimaldi, representing Vedrai, an AI-based decision-making support service for SMEs, highlighted the company’s focus on helping SMEs make better decisions and ask critical questions that they may not have considered before. He also stressed the importance of speaking a language that is accessible and understandable for clients. According to him, the Italian AI market has gained lots of recognition despite entering the market later than France and Spain.

Finally, Cristina Zanini, the Director General of Innovation Experience Hub at Confindustria Brescia, discussed the challenges and opportunities of digitalization for industrial businesses. She identified the lack of incentives for SMEs to invest in Industry 4.0 technologies and the little short-term economic benefits of digitization as major obstacles. However, she also recognized the potential value of data sharing and networked industries, citing the example of the ski resort industry, which faces challenges in predicting visitor numbers and can benefit from sharing data with booking agencies and other relevant parties. She also discussed the role of INEXHUB, a network of industry, artisan, and agricultural organizations, in creating business networks and promoting digitization and data sharing.

Overall, these three professionals provide a snapshot of the diverse and dynamic AI landscape in Italy, where experts and organizations are grappling with the challenges and opportunities of data and AI for business growth and competitiveness. Through their insights and experiences, they provide valuable lessons for companies seeking to leverage data and AI for their own growth and success.

Author: Kristian Gjika

The WebCrow Crossword challenge: AI vs Human – 9/02/2023

Games have proven over time to be an ideal proving ground for AI. Often, methods developed to master a game have turned out to be useful in more concrete areas: think of DeepMind, which, after maturing powerful reinforcement learning techniques while developing AlphaZero and AlphaStar, used those skills to create models such as AlphaFold and AlphaTensor, great strides for science.

It is in this perspective that the efforts of the University of Siena and expert.ai should be framed: at WAICF 2023 they presented an AI model capable of solving crossword puzzles in Italian, English, and French. To test the model, the researchers invited attendees to challenge it in a competition rewarding both accuracy and speed. In all three available languages, competing against native human speakers, the model came out the winner (with a small margin on American puzzles, the most challenging ones).

But how does WebCrow work? The fundamental idea is to combine various modules, called “expert modules”, each suggesting words that could satisfy a particular clue. The modules perform various kinds of search: some are based on predefined rules, while others have access to the internet, from which they draw useful information. Once all plausible candidates are collected, WebCrow enters a merging phase in which a statistical model computes a score for each candidate word. The last phase is the filling one: WebCrow tries to fill in the crossword grid using the candidates found, preferring words with higher scores.
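The three phases described above (candidate generation by expert modules, score merging, and constraint-respecting filling) can be mimicked on a toy crossword with one crossing between an across slot and a down slot. The expert modules and their scores below are invented placeholders, not WebCrow’s actual components.

```python
from itertools import product

# Phase 1: "expert modules" propose candidate words with confidence scores,
# e.g. one rule-based module and one web-search module per clue.
experts_across = [{"cat": 0.6, "car": 0.5}, {"cat": 0.8}]
experts_down = [{"tea": 0.7, "toe": 0.4}]

def merge(expert_lists):
    """Merging phase: combine every module's scores into one score per word."""
    merged = {}
    for scores in expert_lists:
        for word, s in scores.items():
            merged[word] = merged.get(word, 0.0) + s
    return merged

across, down = merge(experts_across), merge(experts_down)

# Phase 3: filling. The across word's third letter must match the down word's
# first letter (their single crossing cell); prefer the highest combined score.
best = max(
    ((a, d) for a, d in product(across, down) if a[2] == d[0]),
    key=lambda pair: across[pair[0]] + down[pair[1]],
)
print(best)  # → ('cat', 'tea')
```

Here “car” has a decent score but no compatible down word, so the constraint eliminates it; on a real grid the filling phase has to trade scores against constraints over hundreds of interlocking slots.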

In doing so, the model has achieved excellent results in all the languages it has been trained on. For the future, the researchers have announced their intention to develop a single model capable of solving crossword puzzles in a large number of languages. This is a difficult challenge because, in addition to the obvious linguistic differences, there are important cultural and thematic differences between the puzzles of different countries. Get ready for WebCrow 3.0 at WAICF 2024!

Authors: Mattia Scardecchia, Dario Filatrella
