Now that we have an LLM running locally with an API, let's build a small CLI client as a proof of concept.
Run the following:

cargo new llm-cli

then do a quick sanity check by building and running the generated project.
I generally avoid installing OpenSSL and use rustls instead, in an effort to keep deployment container images small. However, openai-api-rs requires OpenSSL, so we need the following:
sudo apt-get install -y pkg-config libssl-dev
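With the system packages in place, the crates go into llm-cli/Cargo.toml. A sketch of the dependency section (the version numbers here are assumptions; pin whatever is current when you build):

```toml
[dependencies]
# openai-api-rs pulls in OpenSSL through its HTTP client,
# which is why the system packages above are needed
openai-api-rs = "4"
# async runtime for the client code
tokio = { version = "1", features = ["full"] }
```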
The skeleton of src/main.rs looks like this (the import paths shown match openai-api-rs 4.x and vary a little between crate versions):

use openai_api_rs::v1::api::Client;
use openai_api_rs::v1::chat_completion::{self, ChatCompletionRequest};

#[tokio::main]
async fn main() {
    // client setup and the chat completion request go here
}
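Before wiring up the request itself, the CLI side can be sketched with the standard library alone: treat everything after the binary name as the prompt. `prompt_from_args` is a hypothetical helper name of my own, not part of any crate:

```rust
use std::env;

/// Join all arguments after the binary name into one prompt string,
/// so the tool can be invoked as: llm-cli tell me a joke
fn prompt_from_args(args: &[String]) -> String {
    args.join(" ")
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    let prompt = prompt_from_args(&args);
    println!("prompt: {prompt}");
}
```

This keeps argument handling out of the request-building code, so the prompt can later be fed straight into the chat completion message.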