
I followed this tutorial.

I made some changes:

Summary

1. steps/index_generator.py

[screenshot of the changes to steps/index_generator.py]

2. steps/agent_creator.py

[screenshots of the changes to steps/agent_creator.py]

After the pipeline ran successfully, it created an agent:

[screenshots of the created agent artifact]

I want to use this agent to serve a question/answer service.

Here's what I tried in another Python script:

from zenml.client import Client

client = Client()

# Load the stored agent artifact by its version ID;
# type(agent) is langchain.agents.agent.AgentExecutor
agent = client.get_artifact_version('86cb0da2-ca22-48ec-9548-410ccb073bc2').load()

question = "Hi!"

agent.run({"input": question, "chat_history": []})

It raised the error below. How can I overcome this?

OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should 
pull the model with `ollama pull llama2`.
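
The 404 means the Ollama server can't find the model the agent asks for. As a sanity check, Ollama's documented /api/tags route lists every model the server has pulled; a minimal sketch, assuming the default endpoint at http://localhost:11434:

import requests

# Ask the local Ollama server which models it has pulled.
# Assumes Ollama's default host/port; adjust if yours differs.
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])  # e.g. ['gemma:2b']

Note that the error text says "ollama pull llama2", which suggests the LLM serialized into the agent was configured with model llama2 rather than the gemma:2b that was actually pulled.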

Update

I can interact with the gemma model via the CLI:

[screenshot of a CLI session with the gemma model]
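
A direct invocation from Python, outside the agent, is another useful check; a minimal sketch, assuming the langchain_community Ollama wrapper and the default local endpoint:

from langchain_community.llms import Ollama

# Call the locally pulled model directly, bypassing the ZenML agent.
llm = Ollama(model="gemma:2b")
print(llm.invoke("Hi!"))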

  • Huhu, anyone here? Commented May 17, 2024 at 2:42
  • Did you pull the gemma model and run it in Ollama using its CLI, like "ollama run gemma:2b"? More information can be found at github.com/ollama/ollama Commented May 17, 2024 at 8:11
  • I mentioned it: "I built model gemma:2b from Ollama locally" Commented May 17, 2024 at 9:31
  • For the mentioned error, it looks like the model is not up in Ollama. Please check and confirm that you are able to invoke the model outside your agent code. Commented May 18, 2024 at 15:15
  • @SubashKunjupillai Sure, I invoked the model outside the agent Commented May 18, 2024 at 15:34
