Grok is the state-of-the-art AI agent developed by xAI. xAI has given developers $25 worth of free API credit to use its APIs and build with Grok.¶

1- Code Notes for X_Post_Replies¶

In [ ]:
#importing the required libraries, please make sure you have them installed first, or install using !pip
from openai import OpenAI
import gradio as gr

XAI_API_KEY = "Your API Key goes here"
client = OpenAI(api_key=XAI_API_KEY, base_url="https://api.x.ai/v1")

We use the OpenAI Python library as the client for Grok, since the xAI endpoints are OpenAI-compatible; this makes API-key authentication easy to set up. Setting base_url directs the client's API calls to the Grok API.¶

Defining a function to interact with Grok, combining the user's chosen style with the provided tweet content. The function then calls the chat-completion utility and returns the first message of the response in the chosen style. client.chat.completions.create is the method that asks Grok to generate a text-based response from the provided input messages.¶

In [ ]:
def generate_reply(tweet_content, style): #function to send message to Grok
    system_message = f"""You are an assistant trained to respond in the style of {style}.
    Analyze the following tweet and reply humorously in 3-4 sentences, maintaining the wit characteristic of {style}.
    Tweet: {tweet_content}"""

    completion = client.chat.completions.create(
        model="grok-beta",
        messages=[{"role": "system", "content": system_message}]
    )
    reply = completion.choices[0].message.content
    reply_sentences = reply.split('. ')
    reply = '. '.join(reply_sentences[:4]) + ('.' if len(reply_sentences) > 4 else '') # limit the reply to 4 sentences, restoring the trailing period if truncated

    return reply
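The sentence-limiting step above is easy to get subtly wrong, so here it is as a standalone sketch that can be checked in isolation. The helper name limit_sentences is illustrative, not part of the original code; it reproduces the same split-on-`'. '` behavior as generate_reply.

```python
def limit_sentences(text, max_sentences=4):
    """Keep at most `max_sentences` sentences, splitting on '. '."""
    sentences = text.split('. ')
    if len(sentences) <= max_sentences:
        # Short enough already; return unchanged
        return text
    # Re-join the kept sentences and restore the trailing period
    return '. '.join(sentences[:max_sentences]) + '.'
```

Note that splitting on `'. '` is a heuristic: abbreviations like "e.g." will be counted as sentence boundaries, which is acceptable for short witty replies.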

Defining a wrapper function that returns the witty reply produced by the chat completion¶

In [ ]:
def witty_reply(tweet_content, style):#function for input/output interaction with Gradio
    reply = generate_reply(tweet_content, style)
    return reply

launching Gradio interface¶

  • initiating the interface with gr.Blocks() to lay out the Gradio app.
  • collecting user input as tweet_content and style, with a choice between Chandler and Niles.
  • finally, displaying the button that generates replies and launching the Gradio app in the environment
In [ ]:
with gr.Blocks() as iface: #initiating gradio interface
    tweet_content = gr.Textbox(label="Tweet Content", placeholder="Paste the full tweet with username here")
    style = gr.Radio(choices=["Chandler", "Niles"], label="Choose Reply Style")
    witty_response = gr.Textbox(label="Witty Reply", interactive=False)

    # Button to generate reply
    generate_btn = gr.Button("Generate Reply")
    generate_btn.click(witty_reply, inputs=[tweet_content, style], outputs=witty_response)

iface.launch()
2- Code Notes for Grok_Interview_Prep¶

In [ ]:
#importing necessary libraries
import os
from openai import OpenAI
import gradio as gr
In [ ]:
XAI_API_KEY = os.getenv("XAI_API_KEY")  # getting the xAI API key from the HF Secrets environment
client = OpenAI(api_key=XAI_API_KEY, base_url="https://api.x.ai/v1")

Another way to provide the xAI API key in Hugging Face Spaces is to store it under Settings > Secrets and then retrieve it in the code, instead of hardcoding it¶
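A missing secret only surfaces as an error when the first API request fails, so a small guard at startup can give a clearer message. This is a minimal sketch; the helper name get_api_key is illustrative and assumes the secret is stored as XAI_API_KEY in the Space's Settings > Secrets.

```python
import os

def get_api_key(var_name="XAI_API_KEY"):
    """Return the API key from the environment, failing fast with a clear error if unset."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; add it under Settings > Secrets in the Space.")
    return key
```

The returned value can then be passed to the OpenAI client constructor exactly as in the cell above.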

Defining the function that asks Grok for a theoretical interview question for the user-supplied role, using the "grok-beta" model. The function calls the Grok chat API twice: once to create a question and once to create a corresponding answer for that role.¶

In [ ]:
def generate_theory_question_and_answer(role): #messaging Grok to generate question and answer
    system_message = f"Generate a theoretical interview question for a {role} role."
    completion = client.chat.completions.create(
        model="grok-beta",
        messages=[{"role": "system", "content": system_message}]
    )
    question = completion.choices[0].message.content

    system_message = f"Provide a possible answer to the question: '{question}' for a {role} role."
    completion = client.chat.completions.create(
        model="grok-beta",
        messages=[{"role": "system", "content": system_message}]
    )
    answer = completion.choices[0].message.content

    return question, answer

Defining a wrapper function that runs the interview simulation, returning the generated question and answer¶

In [ ]:
def run_interview_simulation(role): #calling Gradio
    question, answer = generate_theory_question_and_answer(role)
    return question, answer

As in Note 1, we initiate the Gradio interface, take the user input, and display the generate button; only the labels differ this time around¶

In [ ]:
with gr.Blocks() as iface: #launching Gradio interface
    role = gr.Textbox(label="Job Role", placeholder="Enter job role here")
    theory_question = gr.Textbox(label="Question", interactive=False)
    grok_answer = gr.Textbox(label="Answer", interactive=False)

    # Button for generating question and answer
    generate_btn = gr.Button("Generate Question and Answer")
    generate_btn.click(run_interview_simulation, inputs=role, outputs=[theory_question, grok_answer])

iface.launch()