Create Your Own Prompt Enhancer from Scratch

Author: Murphy  |  Views: 28060  |  Time: 2025-03-22 20:02:12

Using Python, Gradio & Groq

If you are an OpenAI user, you might be aware that very recently they added an enhancement to their system prompt on the OpenAI Playground website.

Now, you have the option to ask OpenAI to generate a new, detailed and highly focused prompt that you can use in place of your initial, simpler one.

To test this out, head over to the Playground using the link below and log in with your OpenAI credentials if you're not already logged in.

https://platform.openai.com/playground/

You should see a screen like this.

Image from OpenAI website

If you look at the two icons on the right-hand side of the System Instructions input box, you'll see one that looks like a 4-pointed star. This is the prompt generator and is what will turn your simple prompt into a highly focused and detailed one.

Notice that this icon does not appear for the user prompt input box at the bottom of the screen. But, the secret is, you don't have to use the prompt generator just for system prompts. Instead, you can type in any prompt you would like to enhance, then copy and paste the prompt generator output wherever you need to use it, including to the user prompt input box.

So, click on the star icon, and you will get a pop-up appearing like this,

Image from OpenAI website

Now type in, for example, a relatively simple prompt like this,

You are an expert at growing apple trees. What is the best way 
to grow an apple tree.

After clicking the Create button, I received this as my output after a minute or so.

Provide detailed guidance on the best way to grow an apple tree, including preparation, planting, care, and maintenance.

# Steps

1. **Preparation:**
   - Choose a suitable location with full sun and well-drained soil.
   - Test the soil for pH and nutrients, aiming for a pH of 6.0 to 7.0.

2. **Selecting the Tree:**
   - Select a variety that suits your climate and soil conditions.
   - Decide between a standard, semi-dwarf, or dwarf rootstock based on available space.

3. **Planting:**
   - Plant bare-root apple trees in early spring, as soon as the ground can be worked.
   - Dig a hole twice as wide but no deeper than the root ball.
   - Backfill the hole with native soil, positioning the graft union above ground level.

4. **Initial Care:**
   - Water thoroughly after planting to remove air pockets.
   - Apply mulch around the base to retain moisture and prevent weeds.

5. **Ongoing Care:**
   - Water regularly, especially during dry periods.
   - Prune annually to maintain shape and remove diseased or crossing branches.
   - Fertilize in early spring with a balanced fertilizer formulated for fruit trees.

6. **Pest and Disease Management:**
   - Monitor for common pests and diseases, applying organic or chemical controls as necessary.

# Output Format

Provide the information in a detailed paragraph format, organizing the content logically under each step heading as listed above.

# Notes

- Make sure to emphasize the importance of selecting a suitable variety for your specific climate conditions.
- Ensure that pruning and pest management techniques are described concisely yet comprehensively.
- Highlight any necessary adjustments if the apple tree exhibits signs of stress or disease.

Wow, that is quite an improvement, don't you agree? As mentioned, you can now copy and paste the above prompt into the user prompt box or into any other LLM you might want to use.

For the rest of this article, I want to show that there is no "magic" to implementing this kind of functionality. In fact, I'll show how to emulate OpenAI's prompt enhancer using the open-source tools Groq (for its LLM capabilities) and Gradio (for our front end).

If you need a refresher on what Groq is, or you've never heard of it before, check out my article below for a quick introduction.

Groq and its LPU: Revolutionizing AI Computation

Gradio is an open-source Python GUI package that eases the process of building demos or web applications for Machine Learning models, APIs, or any Python function.

You can find out more about Gradio here.

Gradio: Rapid GUI Prototyping

Setting up a Python dev environment

Before starting a new project, we should set up a separate coding environment for it. I use Conda for this but use whichever method you're used to. Check out this link to see an example of how I do things.

Once your environment is set up, make sure you:

  • Install Jupyter, Groq and Gradio

pip install jupyter
pip install groq
pip install gradio

  • Get a Groq API key

You can get a key by going to,

https://console.groq.com

Log in (or create an account – it's free!). On the left-hand side of the screen, there will be a Keys menu item. Click on that and follow the instructions to create a new key, and take note of it, as you won't be able to view it again after it's first displayed to you.
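Rather than pasting the key directly into your code (as we'll do below for simplicity), a safer habit is to export it as an environment variable and read it at runtime. Here's a minimal sketch of that idea; `load_groq_key` is just a name I've made up for illustration:

```python
import os

def load_groq_key():
    """Read the Groq API key from an environment variable,
    failing loudly if it has not been set."""
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set - export it before running.")
    return key
```

On Linux or macOS you would run `export GROQ_API_KEY="your-key"` in your shell before launching Python or Jupyter.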

One of the great things about Groq is that it gives you access to many of the top open-source models currently available. To see what's there, click the link below,

GroqCloud

I'm going to use the recently released Llama 3.1 70B model from Meta.

The Python code

Our code is split into two main parts. The first implements the basic functionality to get an input prompt from the user. This will be passed to the Llama 3.1 70B model via Groq to generate a more detailed, focused prompt that can be used as input to any LLM, which hopefully will result in a better response.

The second part will be to put a front-end onto our mini app using Gradio. This will be very simple, just one input field for the initial user prompt, and one output field to display the enhanced LLM-generated prompt. We'll also have a single button that will call the code we develop in part 1.

Part 1

from groq import Groq
# Initialize the Groq client
# In production systems we would use environment
# variables or another, more secure, method
# to deal with API keys
#
GROQ_API_KEY="YOUR_GROQ_API_KEY"

client = Groq(api_key=GROQ_API_KEY)
MODEL = "llama-3.1-70b-versatile"

# System message for the LLM
SYSTEM_MESSAGE = """
You are an assistant that takes a user's prompt and turns it into a highly detailed,
focused and relevant prompt. Do NOT answer the user's prompt. Just display the final prompt
that you come up with and NOTHING else. If your prompt is less than 200 words
it's UNLIKELY that it's detailed enough and you should try again. 
"""

# User prompt
user_prompt = """
You are an expert at growing apple trees. What is the best way 
to grow an apple tree.
"""

# Initial messages
messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": user_prompt},
]

# Function to call the model
def call_llm(messages):
    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        max_tokens=4096,
    )
    return response

response = call_llm(messages)

print(response.choices[0].message.content)

As you can see, we used the exact same initial prompt that we used with OpenAI. Let's see what we came up with.

"Provide a comprehensive, step-by-step guide on growing a high-yielding and disease-resistant apple tree, focusing on the key factors that contribute to optimal growth and fruit production.

- Start by specifying the ideal climate conditions for apple trees, including the recommended temperature range, sunlight exposure, and soil pH, highlighting the best regions and microclimates that are most suitable for cultivation.

- Discuss the importance of selecting a disease-resistant and climate-suitable apple tree variety, providing examples of popular varieties for different climates and regions, such as 'Gala' and 'Granny Smith' for warm climates, and 'Braeburn' and 'McIntosh' for cooler climates.

- Outline the optimal soil preparation process, including the necessary nutrients and organic matter requirements for apple trees, highlighting the benefits of composting, mulching, and cover cropping.

- Describe the proper technique for planting an apple tree, including the recommended planting depth, spacing, and tree size, emphasizing the need for adequate support and pruning during the early growth stages.

- Emphasize the critical importance of regular watering, fertilization, and pest management, providing tips on how to identify common pests and diseases, such as aphids, codling moths, and powdery mildew, and how to control them through integrated pest management (IPM) strategies.

- Discuss the role of pruning in promoting healthy growth, fruiting, and air circulation within the tree, providing instructions on the best pruning techniques for different tree sizes and shapes, including the open center, central leader, and espalier systems.

- Highlight the significance of thinning fruit to achieve optimal yields and fruit quality, outlining the recommended thinning times and methods, and emphasizing the importance of providing adequate fruit support and protection from pests and diseases.

- Conclude by discussing the importance of ongoing monitoring and maintenance, including regular soil testing, nutrient management, and tree health assessments, to ensure the long-term productivity and health of the apple tree."

Not too shabby at all.
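Before wiring this into a front end, it's worth noting that the message-construction step can be factored into a small pure helper, which makes the same logic easy to reuse in the Gradio app. This is just a sketch; `build_messages` is a name I've made up, and the shortened system message here is a stand-in for the full one used above:

```python
SYSTEM_MESSAGE = (
    "You are an assistant that takes a user's prompt and turns it into "
    "a highly detailed, focused and relevant prompt."
)

def build_messages(user_prompt, system_message=SYSTEM_MESSAGE):
    """Pair the fixed system instruction with the user's raw prompt,
    in the chat message format the Groq client expects."""
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ]
```

You could then call `call_llm(build_messages(user_prompt))`, keeping the prompt-assembly logic in one place.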

OK, I'm happy that our prompt enhancer is working well. Next, we need to design our front end using Gradio.

Part 2

For this part, I asked ChatGPT how to integrate the code from Part 1 into a Gradio app.

In ChatGPT (model 4o) I typed in the following,

I have developed some prompt enhancer code, shown below, that I want to 
incorporate into a Gradio app. Can you code this up for me? The Gradio 
screen should be simple: one input box to hold the initial user's prompt 
string, and below that, a larger output box to display the result of the 
prompt enhancer output. Below both of these should be a button titled Generate 
that calls my prompt enhancer function. Here is my existing prompt enhancer 
code. Please amend as required so that it works with my Gradio app ...

... I put my code from Part 1 here ...

After a few tweaks, mostly due to making the output box re-sizable with some CSS, ChatGPT returned this code.

import gradio as gr
from groq import Groq

# Initialize the Groq client with your API key
GROQ_API_KEY="YOUR_GROQ_API_KEY"
client = Groq(api_key=GROQ_API_KEY)
MODEL = "llama-3.1-70b-versatile"

# System message for the LLM
SYSTEM_MESSAGE = """
DO NOT answer the user's prompt, instead you will take the user's prompt 
and convert it into a highly detailed, focused, and relevant prompt.
If your prompt is less than 200 words, it's UNLIKELY that it's detailed enough 
and you should try again. Just display the final prompt that you come up with 
and NOTHING else. 
"""

# Function to call the LLM
def call_llm(user_prompt):
    try:
        messages = [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": user_prompt},
        ]
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            max_tokens=4096,
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

# Gradio interface function
def enhance_prompt(user_input):
    return call_llm(user_input)

# Custom CSS to make only the output box resizable with the stretch icon
custom_css = """
#output_box textarea {
    resize: both; /* Allow both horizontal and vertical resizing */
    overflow: auto; /* Enable scrollbars when needed */
    min-height: 150px; /* Minimum height for the box */
    max-height: 500px; /* Maximum height for the box */
}
"""

# Create the Gradio interface
with gr.Blocks(css=custom_css) as demo:
    gr.Markdown("## Prompt Enhancer")

    # Input for user's prompt
    user_input = gr.Textbox(
        label="Enter your prompt here:",
        placeholder="Type your prompt here...",
        lines=3
    )

    # Output for enhanced prompt with resizable option
    output = gr.Textbox(
        label="Enhanced Prompt:",
        placeholder="Your enhanced prompt will appear here...",
        lines=8,
        elem_id="output_box"  # Adding an ID for custom CSS
    ) 

    # Button to generate the enhanced prompt
    generate_btn = gr.Button("Generate Enhanced Prompt")

    # Bind the button to the function
    generate_btn.click(fn=enhance_prompt, inputs=user_input, outputs=output)

# Launch the app
demo.launch()

I ran this code, entered the same prompt as before and the output looked like this,

Image by Author

I'm pretty happy with that. But if the enhanced prompt isn't what you want, just generate another one. Because we're using Groq, regeneration is very quick.
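One way to encourage noticeably different results on each regeneration is to raise the sampling temperature. Groq's chat API accepts the same `temperature` parameter as OpenAI's. The sketch below just assembles the request arguments; `make_request_kwargs` is a hypothetical helper name, not part of the Groq library:

```python
def make_request_kwargs(user_prompt, system_message, temperature=0.9):
    """Assemble the keyword arguments for client.chat.completions.create().
    A higher temperature (e.g. 0.9) makes each regeneration more varied;
    a value near 0 makes the output close to deterministic."""
    return {
        "model": "llama-3.1-70b-versatile",
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": 4096,
        "temperature": temperature,
    }
```

You would then call `client.chat.completions.create(**make_request_kwargs(user_prompt, SYSTEM_MESSAGE))` in place of the call in `call_llm`.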

Summary

In this article, I introduced the enhanced prompt generator feature recently added by OpenAI and demonstrated how to use it. Moreover, I showed how straightforward it is to replicate such functionality with a little code of your own.

I started by guiding you through the setup of a new development environment necessary for any coding project. Next, I illustrated how to emulate OpenAI's functionality using open-source LLMs, specifically Meta's Llama 3.1 70B model through Groq, coupled with a front-end developed via Gradio. I included the complete source code required to implement this solution.

Our development was divided into two key parts. Initially, the code processes a basic user prompt and refines it using Groq to produce a new, more focused and detailed prompt. In the second part, I leveraged ChatGPT to help integrate the initial code into our Gradio front-end, successfully running the combined application.

I aimed to show you the effectiveness and ease of using tools like Groq and Gradio to enhance user interactions with machine learning models. This guide serves as a comprehensive tutorial on replicating the sophisticated prompt enhancement feature similar to that of OpenAI.

_OK, that's all for me just now. I hope you found this article useful. If you did, please check out my profile page at this link. From there, you can see my other published stories and subscribe to get notified when I post new content._

I think you'll find these articles interesting if you like this content.

Process audio data with Gemini Pro 1.5

DuckDB V1 released!

Tags: AI LLM Machine Learning Prompt Engineering Python
