Bedrock AI service gets contextual grounding

Amazon was among the tech giants that last year agreed to a set of recommendations from the White House regarding the use of generative AI. The privacy considerations addressed in those recommendations continue to be rolled out, with the latest included in the announcements at the AWS Summit in New York on July 10. In particular, Contextual Grounding for Guardrails for Amazon Bedrock provides customizable content filters for organizations deploying their own generative AI.

AWS responsible AI leader Diya Wynn spoke with TechRepublic in a virtual pre-briefing session about the new announcements and how companies are balancing generative AI’s breadth of knowledge with privacy and inclusion.

AWS NY Summit Announcements: Changes to Guardrails for Amazon Bedrock

Guardrails for Amazon Bedrock, the security filter for generative AI applications hosted on AWS, has new enhancements:

  • Fine-tuning for Anthropic’s Claude 3 Haiku is available in preview in Amazon Bedrock starting July 10.
  • Contextual grounding checks have been added to Guardrails for Amazon Bedrock; they detect hallucinations in model responses in retrieval-augmented generation (RAG) and summarization applications. A configuration sketch follows this list.
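
To make the new checks concrete, here is a minimal sketch, assuming the boto3 SDK, of creating a guardrail with contextual grounding and relevance filters enabled. The guardrail name, thresholds, region and blocked-response messages are illustrative placeholders, and the exact parameter shapes should be confirmed against the Bedrock documentation.

import boto3

# Control-plane client for creating and managing guardrails.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # example region

response = bedrock.create_guardrail(
    name="demo-rag-guardrail",  # placeholder name
    description="Blocks ungrounded or off-topic answers in a RAG application",
    # Contextual grounding checks score a response against the retrieved source
    # material (GROUNDING) and the user's query (RELEVANCE); responses scoring
    # below the threshold are blocked.
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that answer.",
)
print(response["guardrailId"], response["version"])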

In addition, Guardrails now extends to the standalone ApplyGuardrail API, which lets enterprises and other AWS customers apply safeguards to generative AI applications even when the models behind them are hosted outside of AWS infrastructure. That means app builders can apply toxicity filters and content filters, and flag sensitive information they want excluded from the app; a call sketch follows below. Wynn said custom Guardrails can reduce harmful content by up to 85%.
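
As a rough illustration of how the standalone API could be used, the sketch below, assuming the boto3 bedrock-runtime client, evaluates a piece of model output against an existing guardrail. The guardrail identifier and version are placeholders, and parameter details may differ from the shipped API.

import boto3

# Runtime client: ApplyGuardrail works without invoking a Bedrock-hosted model.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

result = runtime.apply_guardrail(
    guardrailIdentifier="demo-rag-guardrail-id",  # placeholder identifier
    guardrailVersion="1",
    source="OUTPUT",  # evaluate a model response; use "INPUT" for user prompts
    content=[
        {"text": {"text": "Response text produced by a model hosted outside AWS."}}
    ],
)

# "GUARDRAIL_INTERVENED" means a filter blocked or masked part of the content.
print(result["action"])
print(result.get("outputs", []))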

Contextual Grounding and the ApplyGuardrail API will be available on July 10 in select AWS Regions.

Guardrails for Amazon Bedrock enables customers to customize which content a generative AI application will embrace or avoid. Image: AWS

Contextual Grounding for Guardrails for Amazon Bedrock is part of the broader AWS Responsible AI strategy

Contextual grounding aligns with the overall AWS Responsible AI strategy as part of AWS’ ongoing effort to “advance the science as well as continue to innovate and provide our customers with services that they can leverage in the development of their services, developing AI products,” Wynn said.

“One of the areas that we often hear as a concern or consideration for clients is around hallucinations,” she said.

Contextual grounding checks, and Guardrails in general, can help mitigate that problem. Guardrails with contextual grounding can reduce up to 75% of the hallucinations previously seen in generative AI output, Wynn said.

The way customers look at generative AI has changed as generative AI has become more mainstream in recent years.

“When we started some of our client-facing work, clients didn’t necessarily come to us, right?” Wynn said. “We’ve, you know, looked at specific use cases and helped support similar development, but the shift in the last year plus has finally been that there’s a greater awareness (of generative AI) and so companies are asking and wanting to understand more about the ways we build and the things they can do to ensure their systems are secure.”

That means “addressing questions of bias” as well as reducing security concerns or AI hallucinations, she said.

Additions to the Amazon Q Enterprise Assistant and other announcements from AWS NY Summit

AWS announced a host of new capabilities and tweaks to products at the AWS NY Summit. Highlights include:

  • The ability for developers to customize the Amazon Q enterprise AI assistant with secure access to an organization’s code base.
  • Amazon Q added to SageMaker Studio.
  • The general availability of Amazon Q Apps, a tool for building generative AI-powered applications on a company’s own data.
  • Access to Scale AI on Amazon Bedrock for customizing, configuring, and fine-tuning AI models.
  • Vector search for Amazon MemoryDB, which speeds up vector searches in vector databases on AWS.

SEE: Amazon recently announced Graviton4-powered cloud instances, which can support AWS’s Trainium and Inferentia AI chips.

AWS achieves cloud computing training goal ahead of schedule

At its Summit NY, AWS announced that it has followed through on its initiative to train 29 million people worldwide in cloud computing skills by 2025, already surpassing that number. Across 200 countries and territories, 31 million people have taken cloud-related AWS training courses.

AI training and roles

AWS training offerings are numerous, so we won’t list them all here, but free cloud computing training has taken place all over the world, both in person and online. This includes training on generative AI through the AI Ready initiative. Wynn highlighted two roles that can prepare people for the new careers of the AI era: prompt engineer and AI engineer.

“You might not have data scientists necessarily involved,” Wynn said. “They don’t train base models. You might have something like an AI engineer.” The AI engineer fine-tunes a foundation model and incorporates it into an application.

“I think the role of AI engineer is something we’re seeing an increase in visibility or popularity,” Wynn said. “I think the other one is where you now have people responsible for prompt engineering. It’s a new role or skill area that’s needed because it’s not as simple as people think, right, to give your input, or prompt, the right kind of context and detail to get some of the detail that you might want out of a large language model.”

TechRepublic covered the AWS NY Summit remotely.
