To enable instrumentation, import the `LlamaIndexInstrumentor` and call its `instrument()` method:
````python
import logfire
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.readers.web import SimpleWebPageReader
from opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor

logfire.configure()
LlamaIndexInstrumentor().instrument()

# URL for Pydantic's main concepts page
url = 'https://docs.pydantic.dev/latest/concepts/models/'

# Load the webpage
documents = SimpleWebPageReader(html_to_text=True).load_data([url])

# Create index from documents
index = VectorStoreIndex.from_documents(documents)

# Initialize the LLM
query_engine = index.as_query_engine(llm=OpenAI())

# Get response
response = query_engine.query('Can I use RootModels without subclassing them? Show me an example.')
print(str(response))
"""
Yes, you can use RootModels without subclassing them. Here is an example:

```python
from pydantic import RootModel

Pets = RootModel[list[str]]

my_pets = Pets.model_validate(['dog', 'cat'])

print(my_pets[0])
#> dog
print([pet for pet in my_pets])
#> ['dog', 'cat']
```
"""
````
The `LlamaIndexInstrumentor` instruments only the LlamaIndex library itself, not the underlying LLM. If you also want to trace the LLM's own requests and responses, you'll need to instrument it separately: