
Anthropic’s Claude adds a prompt playground to quickly improve your AI apps



Prompt engineering became a hot job last year in the AI industry, but it seems Anthropic is now developing tools to at least partially automate it.


Anthropic released several new features on Tuesday to help developers create more useful applications with the startup’s language model, Claude, according to a company blog post. Developers can now use Claude 3.5 Sonnet to generate, test, and evaluate prompts, using prompt engineering techniques to create better inputs and improve Claude’s answers for specialized tasks.

Language models are pretty forgiving when you ask them to perform some tasks, but sometimes small changes to the wording of a prompt can lead to big improvements in the results. Normally you’d have to figure that wording out yourself, or hire a prompt engineer to do it, but this new feature offers quick feedback that could make finding improvements easier.

The features are housed within Anthropic Console under a new “Evaluate” tab. Console is the startup’s test kitchen for developers, created to attract businesses looking to build products with Claude. One of the features, unveiled in May, is Anthropic’s built-in prompt generator: it takes a short description of a task and constructs a much longer, fleshed-out prompt using Anthropic’s own prompt engineering techniques. While Anthropic’s tools may not replace prompt engineers altogether, the company said they would help new users and save time for experienced prompt engineers.

Within “Evaluate,” developers can test how effective their AI application’s prompts are in a range of scenarios. Developers can upload real-world examples to a test suite or ask Claude to generate an array of test cases for them. They can then compare how effective various prompts are side by side, and rate sample answers on a five-point scale.
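The side-by-side workflow described above can be sketched as a tiny evaluation harness. This is an illustrative mock-up only, not Anthropic’s implementation: the `run_model` function below is a stub standing in for a real Claude API call, and the prompts, test cases, and scoring rule are all invented for the example.

```python
def run_model(prompt: str, case: str) -> str:
    """Stub standing in for a call to a language model.

    Hypothetical behavior for illustration: the more detailed prompt
    yields longer answers, mimicking the effect of prompt tweaks.
    """
    answer = f"Answer for {case!r}"
    if "detail" in prompt:
        answer += " with extra detail and reasoning."
    return answer


def score(answer: str) -> int:
    """Toy 1-5 rating; here, longer answers simply score higher."""
    return min(5, max(1, len(answer) // 15))


# A shared test suite, evaluated against two prompt variants.
test_cases = ["refund request", "shipping delay", "damaged item"]
prompts = {
    "terse": "Answer the customer briefly.",
    "detailed": "Answer the customer and explain your reasoning in detail.",
}

# Compare both prompt variants side by side across the same cases.
results = {
    name: [score(run_model(p, c)) for c in test_cases]
    for name, p in prompts.items()
}
for name, scores in results.items():
    print(name, scores, "avg:", sum(scores) / len(scores))
```

In a real harness, `run_model` would call the Claude API and `score` would be a human rating or a model-graded rubric; the point is only that holding the test suite fixed lets you attribute score differences to the prompt change alone.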

A prompt being fed generated data to find good and bad responses.

In an example from Anthropic’s blog post, a developer noticed that their application was giving answers that were too short across several test cases. The developer was able to tweak a line in their prompt to make the answers longer and apply the change to all their test cases at once. That could save developers a lot of time and effort, especially those with little or no prompt engineering experience.


In an interview at Google Cloud Next earlier this year, Anthropic CEO and co-founder Dario Amodei called prompt engineering one of the most important factors in widespread enterprise adoption of generative AI. “It sounds simple, but 30 minutes with a prompt engineer can often make an application work when it wasn’t before,” said Amodei.






Written by Politixia

