I followed this tutorial
I made a few changes:

I built the gemma:2b model from Ollama and used it in place of ChatOpenAI.
Summary of the files I modified:
1. steps/index_generator.py
2. steps/agent_creator.py
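The relevant change is in steps/agent_creator.py. Roughly, it looks like this (a simplified sketch; the rest of the agent setup follows the tutorial unchanged, and the base_url is just the Ollama default):

from langchain_community.llms import Ollama

# Swap-in for ChatOpenAI; everything else in the step stays as in the tutorial.
llm = Ollama(
    model="gemma:2b",                   # must match a name shown by `ollama list`
    base_url="http://localhost:11434",  # default local Ollama endpoint
)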
After the pipeline ran successfully, it created an agent.
I want to use this agent to serve a question/answer service.
Here's what I tried in another Python script:
from zenml.client import Client

client = Client()
# .load() materializes the stored object from the artifact store
agent = client.get_artifact_version('86cb0da2-ca22-48ec-9548-410ccb073bc2').load()  # type(agent) is langchain.agents.agent.AgentExecutor

question = "Hi!"
agent.run({"input": question, "chat_history": []})
It raised the following error. How can I overcome it?
OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should
pull the model with `ollama pull llama2`.
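One thing I wanted to check is which model name actually got serialized into the artifact; if the stored LLM still points at a default (the message hints at llama2), the 404 would follow. A guess at how to inspect it (the attribute path below fits classic LangChain agents built via initialize_agent and may differ for other agent types):

# Inspect the LLM that was deserialized along with the agent.
llm = agent.agent.llm_chain.llm
print(type(llm).__name__, getattr(llm, "model", None), getattr(llm, "base_url", None))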
Update
I can interact with the gemma model via the CLI.
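To double-check the HTTP side independently of LangChain, here is a direct query against Ollama's REST API (assuming the default localhost:11434 endpoint):

import requests

# Which models does the local Ollama server actually expose?
tags = requests.get("http://localhost:11434/api/tags").json()
print([m["name"] for m in tags["models"]])  # "gemma:2b" should appear here

# Query the model directly, bypassing LangChain entirely.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma:2b", "prompt": "Hi!", "stream": False},
)
print(resp.json()["response"])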