Save Time, Skip Slides: Read the Summary Of the State of AI Report 2022

This slide deck and its notes were summarized by Olu Aganju and OpenAI's davinci model (https://beta.openai.com/docs/models/davinci)!

What’s the State of AI Report? 


This article provides an overview of the 2022 State of AI Report, authored by Nathan Benaich and Ian Hogarth, summarizing its most interesting developments to spark an informed conversation about the state of AI and its implications for the future.

Keep these definitions in mind throughout this article.

Artificial Intelligence (AI): A broad discipline with the goal of creating intelligent machines, as opposed to the natural intelligence demonstrated by humans and animals.

Artificial General Intelligence (AGI): A term used to describe future machines that could match and then exceed the full range of human cognitive ability across all economically valuable tasks.

AI Safety: A field that studies and attempts to mitigate the catastrophic risks that future AI could pose to humanity.

Machine Learning (ML): A subset of AI that often uses statistical techniques to give machines the ability to “learn” from data without being explicitly given instructions for how to do so. This process is known as “training” a “model” using a learning “algorithm” that progressively improves model performance on a specific task.
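
To make this concrete, here is a minimal train-then-predict sketch (using scikit-learn purely as an illustrative library; the dataset is synthetic and nothing here is prescribed by the report):

```python
# Toy sketch: an algorithm "trains" a model on data, and the resulting model
# is then used to make predictions on data it has never seen.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # "training"
print("held-out accuracy:", model.score(X_test, y_test))         # "prediction"
```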

Reinforcement Learning (RL): An area of ML in which software agents learn goal-oriented behavior (called a “policy”) by trial and error in an environment that provides rewards or penalties in response to their actions.

Deep Learning (DL): An area of ML that attempts to mimic the activity in layers of neurons in the brain to learn how to recognize complex patterns in data. The “deep” refers to the large number of layers of neurons in contemporary models, which helps them learn rich representations of data and achieve better performance.

Model: Once an ML algorithm has been trained on data, the output of the process is known as the model. This can then be used to make predictions.

Self-Supervised Learning (SSL): A form of unsupervised learning, where manually labeled data is not needed. Raw data is instead modified in an automated way to create artificial labels to learn from. An example of SSL is learning to complete text by masking random words in a sentence and trying to predict the missing ones. 
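
A toy sketch of that masking idea, with a made-up masking rate and no real model behind it, just to show how artificial labels fall out of raw text:

```python
import random

def mask_tokens(sentence, mask_rate=0.15, mask_token="[MASK]"):
    """Create a self-supervised example: hide random words and keep them as labels."""
    tokens = sentence.split()
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            inputs.append(mask_token)
            labels.append(tok)        # the model must learn to predict this word
        else:
            inputs.append(tok)
            labels.append(None)       # nothing to predict at this position
    return inputs, labels

inputs, labels = mask_tokens("the quick brown fox jumps over the lazy dog")
print(inputs)   # e.g. ['the', '[MASK]', 'brown', ...]
print(labels)   # e.g. [None, 'quick', None, ...]
```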

(Large) Language Model (LM, LLM): A model trained on textual data. The most common use case of an LM is text generation. The term “LLM” is used to designate multi-billion-parameter LMs, but this is a moving definition.

Computer Vision (CV): Enabling machines to analyze, understand, and manipulate images and video.

Transformer: A model architecture at the core of most state-of-the-art (SOTA) ML research. It is composed of multiple “attention” layers which learn which parts of the input data are the most important for a given task. Transformers started in language modeling, then expanded into computer vision, audio, and other modalities.
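
For intuition, here is a minimal sketch of the scaled dot-product attention at the heart of a Transformer layer; the token count, dimensions, and random inputs are toy choices:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the weights say which inputs matter most."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                        # weighted mix of values

# 4 tokens, 8-dimensional embeddings (toy sizes)
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```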

Executive Summary: 

Research

Diffusion models took the computer vision world by storm with impressive text-to-image generation capabilities. 

AI tackles more science problems, ranging from plastic recycling and nuclear fusion reactor control to natural product discovery.

Scaling laws refocus on data: perhaps model scale is not all that you need. Progress towards a single model to rule them all. 

Community-driven open-sourcing of large models happens at breakneck speed, empowering collectives to compete with the large labs.

Inspired by neuroscience, AI research is starting to look like cognitive science in its approaches. 

Industry

Big Tech companies expand their AI clouds and form large partnerships with A(G)I startups.

Hiring freezes and the disbanding of AI labs precipitate the formation of many startups from giants including DeepMind and OpenAI.

Major AI drug discovery companies have 18 clinical assets and the first CE mark is awarded for autonomous medical imaging diagnostics.

The latest AI-for-code research is quickly translated by Big Tech and startups into commercial developer tools.

Politics

The chasm between academia and industry in large-scale AI work is potentially beyond repair: almost 0% of the work is done in academia.

Academia is passing the baton to decentralized research collectives funded by non-traditional sources.

AI continues to be infused into a greater number of defense product categories, and defense AI startups receive even more funding.

Corporate Artificial Intelligence Labs rush into AI for Code Research 

OpenAI’s Codex, the driving force behind GitHub Copilot, has amazed the computer science community with its ability to generate code from a few existing lines or from natural language instructions alone. This breakthrough has led to further research from tech giants such as Salesforce, Google, and DeepMind.

Salesforce researchers built CodeGen, a conversational LLM that lets users specify coding requirements through multi-turn natural language interactions. It is the only open-source model on par with Codex!

Perhaps even more impressive — Google’s LLM PaLM achieved comparable results despite being trained on roughly 50 times less code than Codex. Furthermore, when fine-tuned on Python code, PaLM outperformed existing competitors on the DeepFix code repair task, with 82% accuracy compared to the 71.7% SOTA.

DeepMind’s AlphaCode is a system that can generate complete programs for coding challenges. It achieved impressive results on Codeforces, a coding competition platform, by pre-training on GitHub data and fine-tuning on Codeforces problems and solutions. The system samples millions of potential solutions, then filters and clusters them to arrive at 10 final submissions.
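
A toy, self-contained sketch of that sample-filter-cluster idea, where tiny lambda expressions stand in for generated programs and the tests are made up; AlphaCode's real pipeline is far more sophisticated:

```python
import random
from collections import defaultdict

def sample_candidates(n=1000):
    """Stand-in for sampling programs from a code LM: here, random tiny 'programs'."""
    return [f"lambda x: x * {random.randint(-3, 3)} + {random.randint(-3, 3)}"
            for _ in range(n)]

def passes_tests(src, tests):
    """Keep only candidates that pass the example tests given in the problem."""
    f = eval(src)
    return all(f(x) == y for x, y in tests)

def pick_submissions(tests, k=10):
    survivors = [s for s in sample_candidates() if passes_tests(s, tests)]
    # Cluster surviving programs by their behaviour on probe inputs,
    # then submit one representative from each of the k largest clusters.
    clusters = defaultdict(list)
    for s in survivors:
        f = eval(s)
        clusters[tuple(f(x) for x in range(5))].append(s)
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [c[0] for c in ranked[:k]]

print(pick_submissions(tests=[(0, 1), (1, 3)]))  # programs consistent with y = 2x + 1
```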

Check This Out 

Language models can be trained to use tools such as search engines and calculators with minimal effort — by providing a text-based interface and a small set of human-demonstrated examples.

OpenAI’s WebGPT was the first model to demonstrate this convincingly by fine-tuning GPT-3 to interact with a search engine to provide answers grounded with references. This merely required collecting data of humans doing the task and converting the interaction data into text that the model could consume for training via standard supervised learning.
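
A rough sketch of what such a text-based tool interface can look like; the prompt format, the SEARCH/ANSWER markers, and the parsing loop are illustrative assumptions, not WebGPT's actual protocol:

```python
# Hypothetical text interface for tool use: the model emits lines like
# "SEARCH: <query>", our code runs the tool, and the result is appended to the
# transcript before asking the model for its next step.
FEW_SHOT_PROMPT = """\
Question: Who wrote 'Dune'?
SEARCH: author of the novel Dune
RESULT: Dune is a 1965 novel by Frank Herbert.
ANSWER: Frank Herbert [1]

Question: {question}
"""

def answer(question, model_generate, search):
    transcript = FEW_SHOT_PROMPT.format(question=question)
    while True:
        step = model_generate(transcript).strip()
        transcript += step + "\n"
        if step.startswith("SEARCH:"):
            result = search(step[len("SEARCH:"):].strip())
            transcript += "RESULT: " + result + "\n"
        elif step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()

# Toy run with canned model outputs, just to show the control flow:
canned = iter(["SEARCH: capital of France", "ANSWER: Paris [1]"])
print(answer("What is the capital of France?",
             model_generate=lambda _: next(canned),
             search=lambda q: "Paris is the capital of France."))
```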

Adept AI, a new AGI company, is commercializing this paradigm. The company trains large transformer models to interact with websites, software applications, and APIs in order to drive workflow productivity.

How DALL-E 2 and Stable Diffusion are related

DALL-E 2 and Stable Diffusion are related in that they are both based on Diffusion Models (DMs). DMs learn to reverse successive noise additions to images by modeling the inverse distribution as a Gaussian, parameterized as a neural network. These models have applications in text-to-image generation, controllable text generation, model-based reinforcement learning, video generation, and molecular generation. DMs are slower at inference time than other techniques, but newer methods such as denoising in a lower-dimensional space allow them to generate higher-quality samples with less diversity.
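
For intuition, here is a minimal sketch of the forward noising process that a diffusion model is trained to reverse; the noise schedule and the array standing in for an image are toy choices:

```python
import numpy as np

def forward_noise(x0, t, betas):
    """Add t steps of Gaussian noise to a clean sample x0 (the 'forward' process).

    A diffusion model is trained to invert this: given the noisy x_t and t,
    predict the noise so the clean image can be recovered step by step.
    """
    alpha_bar = np.cumprod(1.0 - betas)        # cumulative product of (1 - beta)
    noise = np.random.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return x_t, noise                          # the noise is the training target

betas = np.linspace(1e-4, 0.02, 1000)            # a common linear schedule
x0 = np.random.uniform(-1, 1, size=(64, 64, 3))  # stand-in for an image
x_t, target = forward_noise(x0, t=500, betas=betas)
print(x_t.shape, target.shape)
```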

OpenAI & Google Text-To-Image Generation

DALL-E 2, the second iteration of DALL-E, was released in mid-2022 and came with a major jump in the quality of generated images.

Parti treats text-to-image generation as a sequence-to-sequence task, predicting a representation of the pixels of the image. It can acquire new abilities, such as spelling, as the number of parameters and amount of training data increase. Other text-to-image models include GLIDE (OpenAI), Make-A-Scene (Meta), and CogView2 (Tsinghua, BAAI), which can handle both English and Chinese.

Who Will win the Text-To-Video generation race?

The race to develop the best text-to-video generation technology is a battle between Meta and Google. Meta released Make-A-Video, a diffusion model for video generation, while Google developed two models: Imagen Video, a diffusion model, and Phenaki, a non-diffusion model that can adapt dynamically to additional prompts. It remains to be seen who will come out on top.

The compounding effects of Government contracting in AI

In 1962, the US government bought all available integrated circuits, which catalyzed the development of the technology and its end market. Now, some governments are repeating this approach by acting as “buyers of first resort” for AI companies, giving them access to exclusive data sets that allow them to create superior consumer and enterprise AI software.

Research has found that Chinese AI companies that sign more government contracts produce more AI software. Chinese companies have become the major players in the computer vision software space, and the same principle likely applies to other heavily regulated sectors such as defense and healthcare, where expertise gained from unique data can be used to create everyday AI products.

How should BIG TECH deal with their language model consumer products? 

Meta released the BlenderBot 3 chatbot for free public use in August 2022, but it gave wrong information and drew widespread media criticism. Google took this opportunity to publish a paper on its chatbot and announced a larger initiative called “AI Test Kitchen”, where regular users can interact with Google’s latest AI agents, including LaMDA.

Large-Scale release of AI systems to the 1B+ users of Google and Facebook all but ensures that every ethics or safety issue with these systems will be surfaced, either by coincidence or by adversarially querying them. But only by making these systems widely available can these companies fix those issues, understand user behavior and create useful and profitable systems. 

Escaping this dilemma, 4 of the authors of the paper introducing LaMDA went on to found or join Character.AI, “an AI company creating revolutionary open-ended conversational applications.”

TALENT FROM TOP-TIER AI LABS VENTURES OFF

Alums from OpenAI and DeepMind are pursuing work in a variety of fields, including AGI, AI safety, biotech, fintech, energy, dev tools, and robotics! Meta, which had allowed its AI research group to pursue its own projects without the pressure of a product roadmap, concluded that while a centralized research organization can be beneficial, it can hamper the ability to fully integrate that research into products.

DeepMind talent created new startups such as:

Inflection — Raised $225M in 2022

Isomorphic Laboratories 

ShiftLab 

Open Climate Fix 

Equilibre Technologies 

Recursive 

Phaidra 

CSM

Diagonal 

Haiper 

Saiga 

Kosen Labs 

OpenAI talent went off and created:

Adept AI — Raised $65M in 2022

Covariant 

Anthropic — Raised $580M in 2022

Daedalus 

Gantry 

Living Carbon 

Conception

Pilot 

Titan 

AI Coding Assistants are being deployed fast, with early signs of developer gains 

OpenAI released Codex as a research tool in July 2021 and made it commercially available in June 2022 through Microsoft’s GitHub Copilot. In the same month, Amazon launched CodeWhisperer, while Google revealed it is using an internal ML-powered code completion tool.

HEALTHCARE + AI 

Oxipit, a Lithuanian startup, has achieved a groundbreaking certification for its computer vision-based diagnostic system, which can now autonomously report on chest X-rays in which it finds no abnormalities, without a radiologist having to review them.

With radiologists in short supply and imaging volumes on the rise, it has become difficult to accurately diagnose which X-rays contain disease and which do not.

Oxipit’s ChestLink is an AI system that can accurately identify normal chest scans. It has been trained on over a million images and, in a study of 10,000 chest X-rays of Finnish primary health care patients, it achieved a sensitivity of 99.8% and a specificity of 36.4% for recognizing clinically significant pathology.

Since scans the system confidently labels as normal need no manual review, this means it can reduce radiologist workload by 36.4% without compromising patient safety.
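
The arithmetic behind that claim, as a back-of-the-envelope sketch; the normal_fraction value is a hypothetical assumption rather than a figure from the study, and the headline 36.4% corresponds to a population that is almost entirely normal:

```python
# Back-of-the-envelope: scans the system confidently auto-reports as normal no
# longer need a radiologist's read.
specificity = 0.364      # share of truly normal scans auto-reported as normal
normal_fraction = 0.95   # hypothetical share of incoming scans that are normal

workload_reduction = specificity * normal_fraction
print(f"~{workload_reduction:.1%} of scans would need no manual read")  # ~34.6%
```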

Universities are hotspots for AI spinouts — according to UK studies [Beauhurst | GOV.UK]

Companies such as:

Databricks 

Snorkel 

SambaNova 

Exscientia 

Spinout.FYI — An open database to help founders and policymakers fix the spinout problem

Spinout.FYI crowdsourced a database of spinout deal terms from founders representing more than 70 universities all over the world. The database spans AI and non-AI companies across different product categories (software, hardware, medical, materials, etc.). A major reason for the current situation is the information asymmetry between founders and technology transfer offices (TTOs), and the spinout.fyi database aims to give founders a leg up in the process.

When college programs end, what comes next?

UC Berkeley (AMPLab, later RISELab) and Stanford (DAWN) launched successful lab programs focused on developing Big Data and AI technologies, which resulted in the spin-out of multiple companies, such as Databricks and Anyscale.

Top AI “unicorns” include:

  • Illumina 
  • Cruise 
  • Nuro
  • 4Paradigm

Plus many more!

Politics

The Torch is passing from academia to decentralized research collectives 

Decentralized research projects have seen increasing membership, funding, and growth this year. China’s Tsinghua GLM-130B LLM was a particularly impressive large-scale academic project. EleutherAI released the 20B-parameter GPT-NeoX, and Hugging Face led the BigScience initiative that produced the 176B-parameter BLOOM multilingual LLM. Stability AI was a surprise entrant that obtained 4,000 A100 GPUs, uniting open-source communities and releasing Stable Diffusion, a project previously thought to be achievable only by large centralized technology companies.

AI Continues to be infused into a greater number of defense product categories. 

Defense technology companies are increasingly applying AI to electronic warfare, geospatial sensor fusion, and autonomous hardware platforms. Epirus, founded in 2018, has developed an advanced electromagnetic pulse weapon to protect against dangerous drone swarms. Sweden’s Saab has created COMINT and C-ESM sensors that provide automated or operator-controlled surveillance depending on context. Modern Intelligence, founded in 2020, has created a platform-independent AI for geospatial sensor data fusion, situational awareness, and maritime surveillance. Anduril has grown both organically and through acquisitions of companies such as Area-I and Dive Technologies, expanding its autonomous hardware platforms and capabilities.

AI in defense gathers big funding momentum 

Heavily funded start-ups and Amazon, Microsoft, and Google continue to normalize the use of AI in defense.  

NATO has established a $1 billion fund to invest in dual-use technologies, and European defense AI company Helsing has secured a Series A led by Daniel Ek of Spotify. Microsoft, Amazon, and Google are all vying for a major role in defense, with Microsoft’s $10 billion Pentagon contract recently canceled and likely to be re-awarded in late 2022. Other companies have also seen success: Anduril landed its largest DoD contract to date, and Shield AI, a developer of military drones, raised funding at a $2.3 billion valuation.

AI Safety 

The UK is taking the lead in acknowledging these uncertain but catastrophic risks 

The UK’s national strategy for AI, published in late 2021, notably made multiple references to AI safety and the long-term risks posed by misaligned AGI.

“While the emergence of Artificial General Intelligence (AGI) may seem like a science fiction concept, concern about AI safety and non-human-aligned systems is by no means restricted to the fringes of the field.”

“We take the firm stance that it is critical to watch the evolution of the technology, to take seriously the possibility of AGI and ‘more general AI’ and to actively direct the technology in a peaceful, human-aligned direction.”

“The government takes the long term risk of non-aligned AGI, and the unforeseeable changes that it would mean for the UK and the world, seriously.”

“[We must] establish medium and long term horizon scanning functions to increase government’s awareness of AI safety.”

“[We must] work with national security, defense, and leading researchers to understand how to anticipate and prevent catastrophic risks.”

AI researchers increasingly believe that AI safety is a serious concern 

A survey of the ML and NLP communities has revealed a growing consensus that AI safety should be prioritized more than it currently is, and that AGI is an important concern we are making progress towards. Furthermore, a majority of respondents believe that AI will cause social changes comparable to the Industrial Revolution this century, and nearly 40% think AI could even cause a catastrophe as bad as nuclear war.

AI safety is attracting more talent… yet remains extremely neglected 

Awareness of AI existential risk is leading to more research efforts than ever before. Around 300 researchers are now working full-time on AI safety, though this is still a tiny fraction of the broader AI field. Non-profit research labs, such as the Center for AI Safety and the Fund for Alignment Research, are being established. Additionally, the Centre for the Governance of AI has been spun out from the Future of Humanity Institute in Oxford. There has been a surge of interest in education programs, with over 750 people taking part in the online AGI Safety Fundamentals course. New scholarships, such as the Vitalik Buterin PhD Fellowship in AI Existential Safety, have been created. Finally, OpenAI’s Chief Scientist Ilya Sutskever has shifted to spending half of his time on safety research.

Conjecture is the first well-funded startup purely focused on AGI alignment

Unlike DeepMind, Google Brain, OpenAI, and other major research labs, Conjecture is primarily focused on AI alignment, with an emphasis on conceptual research and “uncorrelated bets” distinct from other organizations.

Conjecture is a London-based startup led by Connor Leahy, who previously co-founded EleutherAI, the organization that kicked off decentralized development of large AI models.

Conjecture operates under the assumption that AGI will be developed within the next 5 years and that, on the current trajectory, it will be misaligned with human values and consequently catastrophic for our species.

They are the first AI Alignment group to have published their internal infohazard policy. 

This continues a broader trend of some new AGI focused labs taking alignment research more seriously.

Summary

End Of The Article!

If you made it this far, thank you and I hope you are excited and eager to dive deeper into AI, AGI, ML, LLM’S and more!

The State of AI Report 2022 highlights the development of text-to-image, text-to-video, and code generation technologies, as well as the use of AI for drug discovery, plastics recycling, nuclear fusion reactor control, and natural product discovery. AI is increasingly being incorporated into defense product categories, with defense AI startups receiving more funding. AI coding assistants are being adopted rapidly, with early signs of developer productivity gains, and the UK is leading the way in recognizing the uncertain but potentially catastrophic risks of AI. Universities are becoming hotspots for AI spinouts, and Conjecture is the first well-funded startup focused solely on AGI alignment.

Overall, with implications for business, industry, and politics, AI is advancing rapidly and gaining momentum across multiple sectors. To ensure a positive future for AI, it is important to remain aware of its potential risks and to continue developing safety research and safeguards.

Thank You to the creators of State Of AI!

References:

  1. State of AI
  2. Anthropic
  3. OpenAI 
  4. DeepMind
  5. Google Palm
  6. Google LaMDA
  7. Imagen 
  8. Parti
  9. Adept.ai
  10. Nato — Summary of NATO AI strategy 

Connect

If you’d like to keep up with my writing and projects, connect with me on Linkedin. Follow the links below to learn more!

Check out my website — coverletterbuilder — and read my official blog post to unlock the power of automated background removal with AI.

Official Blog —  https://aiapplicationsblog.com/