The Scott Logic sustainability team has recently been updating the open-source Technology Carbon Standard website to better reflect evolving challenges of carbon accounting in the tech sector.

The latest revisions focus on expanding coverage of the standard to address the carbon footprint of Large Language Models (LLMs), with plans to broaden the scope to include other areas of AI in future updates.

This update introduces a new category in Upstream Emissions called “Content”. This category encompasses foundation models, as well as content and data treated as commodities by organisations. It examines emissions associated with the generation, distribution, storage and archiving of content, whether AI-generated or traditional.

Some of the key additions include:

  • Remove the word “proposed” from the Technology Carbon Standard, reflecting its established role in guiding sustainable practices.
  • Establish the Technology Carbon Standard position with regard to the Life Cycle Assessment (LCA) methodology.
  • Expand the Tech Carbon Standard glossary to include AI-related terminology.
  • Create a resource listing major AI cloud platforms and the information these platforms have released, where available.
  • Develop a practical guide to reducing AI-related emissions, helping organisations, whatever their AI use cases, to identify key strategies for cutting their carbon emissions, from optimising hardware and refining prompts through to leveraging compression techniques and selecting appropriate models.
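As a simple illustration of one of those strategies, the sketch below shows why refining prompts reduces inference compute: processing cost grows roughly with the number of tokens handled. The token counts are invented for illustration only and do not come from the guide.

```python
# Rough illustration of why prompt refinement reduces inference energy:
# compute scales roughly linearly with total tokens processed.
# All token counts here are assumed placeholders, not measured values.

def relative_compute(prompt_tokens: int, output_tokens: int) -> int:
    """Relative processing cost, proportional to total tokens handled."""
    return prompt_tokens + output_tokens

verbose = relative_compute(prompt_tokens=800, output_tokens=300)
concise = relative_compute(prompt_tokens=150, output_tokens=300)

saving = 1 - concise / verbose
print(f"Compute reduction from prompt refinement: {saving:.0%}")
```

Under these assumed numbers, trimming the prompt alone cuts the tokens processed per request by more than half, and the energy saving compounds across every request the application serves.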

In the Upstream Emissions category:

  • Highlight the computationally intensive training phase of foundation models, including emissions associated with manufacturing specialised hardware and operating AI data centres. This section is designed to help organisations developing language models assess the environmental impact of their assets and supply chain.
  • Account for emissions linked to content and data in all its forms. By doing so, organisations trading in these commodities can better identify the carbon hotspots that occur before acquisition.

In the Operational Emissions category:

  • Add a sub-category for machine learning, fine-tuning and self-hosting. This section focuses on the hardware requirements associated with manipulating foundation models. It emphasises the distinction between Large Language Models (LLMs) and smaller language models (SLMs), as their environmental impacts differ significantly due to variations in parameter scale and computational requirements.
  • In the Networking Devices section, recognise the growing adoption of AI-powered applications and their reliance on specialised hardware for accelerated computing.
  • In the same category, address emissions associated with organisations treating content and data as commodities, since network infrastructure including content delivery networks (CDNs) and edge servers adds its own energy overhead.
  • Address the increased energy consumption linked to evolving hardware requirements under Onsite Employee Devices to account for the higher-spec devices needed for employees to run AI applications effectively.
  • In the same category, emissions associated with streaming, downloading and creating content are also discussed to enable organisations that interact with content and data as part of their operations to account for their cumulative environmental impact.
  • Include emissions associated with developing and hosting an AI product on cloud platforms. AI workloads are more energy-intensive than traditional computing, primarily due to the high computational demands of training and inference, which require significant processing power, cooling and data storage.
  • Under SaaS, expand on the concept of Software-as-a-Service and examine the environmental impact of AI-driven SaaS solutions, enabling organisations that use AI through web interfaces, AI-powered tools or API-based applications to assess their carbon emissions.
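To illustrate the LLM/SLM distinction mentioned above, the sketch below compares inference compute using the commonly cited rule of thumb of roughly 2 × parameters FLOPs per generated token for dense models. The model sizes and token count are assumptions chosen for illustration, not figures from the standard or from any particular model.

```python
# Back-of-envelope comparison of inference compute for a large vs a small
# language model, using the common ~2 * parameters FLOPs-per-token
# approximation for a dense forward pass. All figures are illustrative
# assumptions, not measurements from any specific model or platform.

def inference_flops(parameters: float, tokens: int) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a dense model."""
    return 2 * parameters * tokens

llm_params = 70e9   # assumed large model: 70B parameters
slm_params = 3e9    # assumed small model: 3B parameters
tokens = 500        # assumed tokens generated per request

llm = inference_flops(llm_params, tokens)
slm = inference_flops(slm_params, tokens)

print(f"LLM: {llm:.1e} FLOPs, SLM: {slm:.1e} FLOPs, ratio: {llm / slm:.0f}x")
```

Under these assumptions the large model needs over twenty times the compute per request, which is why model selection is one of the most direct levers on operational emissions.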

In the Downstream Emissions category:

  • Account for the carbon cost of customer inference activities and the cumulative impact of AI application usage. While the per-request impact may be small, the cumulative effect of millions or billions of interactions can become significant, especially as AI adoption scales globally.
  • In the same section, address emissions arising from customer interactions with content and data, including streaming, content creation and AI-enabled features.
  • Highlight emissions associated with the amount of network traffic involved in AI applications under Network Data Transfer. As AI models increasingly rely on real-time data exchange, cloud-based processing, and distributed systems, the energy required to manage this data can contribute significantly to the overall carbon footprint of AI deployments.
  • In the Downstream Infrastructure section, highlight scenarios where customers host AI applications themselves. This involves hardware demands, such as the high-performance GPUs, RAM and advanced cooling systems required to support these workloads, all of which contribute to the overall environmental impact of deployment.
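The cumulative-impact point above can be made concrete with a back-of-envelope estimate. Every figure below, including the energy per request, grid carbon intensity and request volume, is an assumed placeholder for illustration rather than data from the Technology Carbon Standard.

```python
# Illustrative estimate of how small per-request inference emissions
# accumulate at scale. All inputs are assumed placeholders.

ENERGY_PER_REQUEST_KWH = 0.0003   # assumed ~0.3 Wh per inference request
GRID_INTENSITY_G_PER_KWH = 400    # assumed grid intensity (gCO2e/kWh)
REQUESTS_PER_DAY = 50_000_000     # assumed daily requests for a popular app

# grams per request * requests per day, converted to kilograms
daily_kg = ENERGY_PER_REQUEST_KWH * GRID_INTENSITY_G_PER_KWH * REQUESTS_PER_DAY / 1000
annual_tonnes = daily_kg * 365 / 1000

print(f"Daily: {daily_kg:,.0f} kg CO2e; annual: {annual_tonnes:,.0f} tonnes CO2e")
```

Even though each request emits a fraction of a gram under these assumptions, the annual total runs to thousands of tonnes of CO2e, which is why downstream inference deserves its own accounting category.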

The goal of the Technology Carbon Standard is to provide the information technology sector with the knowledge needed to reduce its climate impact. As an open-source project, contributions and improvements are welcomed from the community.

By increasing awareness and transparency around the carbon emissions of technology, the Technology Carbon Standard aims to accelerate the IT industry’s transition to net zero.

Check out the website to learn more at http://www.techcarbonstandard.org.

Get involved via the GitHub repo at https://github.com/ScottLogic/Technology-Carbon-Standard.