Researchers at Stanford University have released a new report assessing the transparency of AI foundation models from companies such as OpenAI and Google.
The report urges companies to be more transparent about their foundation models and offers an initial measure for governments struggling to regulate this complex and fast-growing sector.
The index, compiled by the Stanford Institute for Human-Centered Artificial Intelligence’s Center for Research on Foundation Models, scores models on a range of parameters, including openness about the data and human labor used to train them.
Foundation models are AI systems trained on large-scale data sets and capable of performing a wide array of tasks, including writing and coding; they are typically developed by the companies driving the surge in generative AI. Over the past three years, transparency has been in decline even as AI capabilities have risen, a trend highlighted by Stanford professor Percy Liang.
The report’s index grades 10 popular models on 100 indicators of transparency, including disclosure of the data used to train each model and how much compute was used to build it. All of the models scored “unimpressively”: even the most transparent, Meta’s Llama 2, received just over half the possible marks at 53 out of 100, while OpenAI’s GPT-4 scored 47. Amazon’s Titan ranked lowest, with only 11 out of 100.
Since the launch of ChatGPT, the Microsoft-backed OpenAI product, businesses of all sizes have taken an interest in foundation models, capturing attention across the industry. As demand for, and reliance on, these models for decision-making and automation grows, the report’s authors stress the importance of better understanding their limitations and biases.