> ## Documentation Index
> Fetch the complete documentation index at: https://docs.pinecone.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Pinecone documentation

> Pinecone is the leading vector database for building accurate and performant AI applications at scale in production.

- Set up a fully managed vector database for high-performance semantic search
- Create an AI assistant that answers complex questions about your proprietary data

## Workflows

Use integrated embedding to upsert and search with text and have Pinecone generate vectors automatically (a minimal code sketch of this workflow appears at the end of this page):

1. [Create an index](/guides/index-data/create-an-index) that is integrated with one of Pinecone's [hosted embedding models](/guides/index-data/create-an-index#embedding-models). Dense indexes and vectors enable semantic search, while sparse indexes and vectors enable lexical search.
2. [Prepare](/guides/index-data/data-modeling) your data for efficient ingestion, retrieval, and management in Pinecone.
3. [Upsert](/guides/index-data/upsert-data) your source text and have Pinecone convert the text to vectors automatically.
4. [Use namespaces to partition data](/guides/index-data/indexing-overview#namespaces) for faster queries and multitenant isolation between customers.
5. [Search](/guides/search/search-overview) the index with a query text. Again, Pinecone uses the index's integrated model to convert the text to a vector automatically.
6. [Filter by metadata](/guides/search/filter-by-metadata) to limit the scope of your search, [rerank results](/guides/search/rerank-results) to increase search accuracy, or add [lexical search](/guides/search/lexical-search) to capture both semantic understanding and precise keyword matches.

If you use an external embedding model to generate vectors, you can upsert and search with vectors directly (see the last sketch at the end of this page):

1. Use an external embedding model to convert data into dense or sparse vectors.
2. [Create an index](/guides/index-data/create-an-index) that matches the characteristics of your embedding model. Dense indexes and vectors enable semantic search, while sparse indexes and vectors enable lexical search.
3. [Prepare](/guides/index-data/data-modeling) your data for efficient ingestion, retrieval, and management in Pinecone.
4. [Load your vectors](/guides/index-data/data-ingestion-overview) and metadata into your index using Pinecone's import or upsert feature.
5. [Use namespaces to partition data](/guides/index-data/indexing-overview#namespaces) for faster queries and multitenant isolation between customers.
6. Use an external embedding model to convert a query text to a vector and [search](/guides/search/search-overview) the index with the vector.
7. [Filter by metadata](/guides/search/filter-by-metadata) to limit the scope of your search, [rerank results](/guides/search/rerank-results) to increase search accuracy, or add [lexical search](/guides/search/lexical-search) to capture both semantic understanding and precise keyword matches.

## Start building

- Command-line tool for managing Pinecone infrastructure and data.
- Comprehensive details about the Pinecone APIs, SDKs, utilities, and architecture.
- Simplify vector search with integrated embedding and reranking.
- Hands-on notebooks and sample apps with common AI patterns and tools.
- Pinecone's growing number of third-party integrations.
- Resolve common Pinecone issues with our troubleshooting guide.
- News about features and changes in Pinecone and related tools.
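The following is a minimal sketch of the integrated-embedding workflow above, using the Python SDK. The index name, namespace, record fields, and cloud/region are illustrative assumptions; `llama-text-embed-v2` is one of Pinecone's hosted embedding models.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a dense index integrated with a hosted embedding model.
# "field_map" tells Pinecone which record field holds the source text.
if not pc.has_index("docs-example"):
    pc.create_index_for_model(
        name="docs-example",
        cloud="aws",
        region="us-east-1",
        embed={
            "model": "llama-text-embed-v2",
            "field_map": {"text": "chunk_text"},
        },
    )

# Upsert source text into a namespace; Pinecone converts it to
# vectors automatically using the index's integrated model.
index = pc.Index("docs-example")
index.upsert_records(
    "example-namespace",
    [
        {"_id": "rec1", "chunk_text": "Namespaces partition data within an index.", "category": "docs"},
        {"_id": "rec2", "chunk_text": "Dense vectors enable semantic search.", "category": "docs"},
    ],
)
```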
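Searching the same index with a query text, again letting the integrated model do the embedding. The metadata filter and the reranking model shown here are illustrative; both are optional refinements described in the workflow above, and this assumes a recent Python SDK version that supports `index.search`.

```python
# Search with a query text; the index's integrated model embeds it.
results = index.search(
    namespace="example-namespace",
    query={
        "inputs": {"text": "How do namespaces work?"},
        "top_k": 5,
        # Optional: limit the scope of the search with a metadata filter.
        "filter": {"category": {"$eq": "docs"}},
    },
    # Optional: rerank the initial results to increase accuracy.
    rerank={
        "model": "bge-reranker-v2-m3",
        "top_n": 3,
        "rank_fields": ["chunk_text"],
    },
)
print(results)
```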
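And a minimal sketch of the bring-your-own-vectors workflow. Here `embed_fn` is a hypothetical stand-in for whatever external embedding model you use, and the dimension and metric are assumptions that must match that model.

```python
from pinecone import Pinecone, ServerlessSpec

def embed_fn(text: str) -> list[float]:
    # Hypothetical wrapper around an external embedding model;
    # replace with a real call that returns a 1024-dimensional vector.
    raise NotImplementedError

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a dense index whose dimension and metric match the external model.
if not pc.has_index("docs-example-external"):
    pc.create_index(
        name="docs-example-external",
        dimension=1024,  # must equal the embedding model's output size
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )

index = pc.Index("docs-example-external")

# Upsert pre-computed vectors and their metadata into a namespace.
index.upsert(
    vectors=[
        {
            "id": "vec1",
            "values": embed_fn("Namespaces partition data within an index."),
            "metadata": {"category": "docs"},
        },
    ],
    namespace="example-namespace",
)

# Embed the query text externally, then search with the vector.
results = index.query(
    namespace="example-namespace",
    vector=embed_fn("How do namespaces work?"),
    top_k=3,
    filter={"category": {"$eq": "docs"}},
    include_metadata=True,
)
print(results)
```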