News & Intel (Live)

Daily AI-curated intelligence on Palantir Foundry, Ontology, AIP, Apollo, contracts, and community feedback. Updated automatically via GitHub Actions every day at 7 AM UTC.
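The daily 7 AM UTC refresh described above can be driven by a scheduled GitHub Actions workflow. A minimal sketch; the workflow name, script path, and secret name are illustrative assumptions, not taken from this site's actual repository:

```yaml
# .github/workflows/daily-intel.yml (hypothetical file name)
name: Daily intel refresh
on:
  schedule:
    - cron: "0 7 * * *"   # every day at 07:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # The curation script name below is an assumption for illustration.
      - run: node scripts/update-feed.js
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git commit -m "chore: daily intel refresh" || echo "no changes"
          git push
```

Note that GitHub schedules `cron` triggers in UTC, so no timezone adjustment is needed for a 7 AM UTC run.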

132 Total Articles · 248 Contract Awards · 89 Community Feedback · 5 Daily Briefings
FOUNDRY · AIP · RELEASE

Follow up on converting PDF into one image per page

https://community.palantir.com/t/how-can-i-convert-a-pdf-into-one-image-per-page-for-further-vision-llm-processing/4208/3?u=brandon

Hi @Isy, I'm working to achieve something similar to Vincent with regard to splitting PDF pages. Is there any update on that functionality for Pipeline Builder? I'm using extract text for the entire PDF and filtering by rows, and I would like to parse specific pages visually, but the media reference field covers the entire PDF. Thank you, Brandon

1 post, 1 participant. Read full topic.

Palantir Community — Latest · Feb 23, 2026
FOUNDRY · AIP

NLP for creating schema in LLM Node in Pipeline Builder

Minor feature request: when I use an LLM node in Pipeline Builder, I would like to be able to paste a prompt and have AIP create the schema for me (the main use case being entity extraction). Many times I have the schema I want in text but can't easily add it. An import via CSV/Excel would be a good alternative as well.

https://www.palantir.com/docs/foundry/pipeline-builder/pipeline-builder-llm

I looked at the docs above to see whether this already exists; if it does, please link it. I'd really appreciate it!

1 post, 1 participant. Read full topic.

Palantir Community — Latest · Feb 23, 2026
FOUNDRY

Unable to Connect Foundry to MongoDB Atlas (Free Edition)

Hi Palantir Community, I'm encountering persistent issues trying to connect Foundry to MongoDB Atlas. Despite trying multiple connection strategies, I keep getting connection timeout errors. Here's what I've tried so far.

Attempted strategies:
- Atlas connection type, with the host in the format account1.abcd.mongodb.net
- Authentication configured
- Network access added in MongoDB Atlas for the Foundry IP
- Connection options: ssl, retryWrites, w=majority

Common error message: MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.

The connection works fine using a standard MongoDB connection string (just not from Foundry).

Questions:
- Are there specific network/DNS configurations needed for Foundry to connect to MongoDB Atlas?
- Is there a recommended connection strategy for MongoDB Atlas in Foundry?
- Are there any known limitations or requirements when connecting Foundry to MongoDB Atlas?

Palantir Community — Latest · Feb 23, 2026
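One detail worth checking in cases like the one above: Atlas's default mongodb+srv:// connection strings depend on DNS SRV and TXT lookups, which restricted egress environments sometimes cannot resolve, while the older seed-list form names every replica-set host explicitly. The strings below are illustrative only (hosts, credentials, database, and replica-set name are placeholders), and whether this resolves the Foundry timeout is an assumption to verify against the Foundry data-connection documentation:

```text
# SRV form (requires DNS SRV + TXT lookups):
mongodb+srv://user:pass@account1.abcd.mongodb.net/mydb?retryWrites=true&w=majority

# Equivalent seed-list form (no SRV lookup; hosts and replicaSet are placeholders):
mongodb://user:pass@account1-shard-00-00.abcd.mongodb.net:27017,account1-shard-00-01.abcd.mongodb.net:27017,account1-shard-00-02.abcd.mongodb.net:27017/mydb?ssl=true&replicaSet=atlas-xxxx-shard-0&authSource=admin&retryWrites=true&w=majority
```

The seed-list form also makes it explicit which hostnames and port (27017) the egress allowlist must cover.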
FOUNDRY · ONTOLOGY · AIP

OSDK security does not work with LLM proxies

Below is a client I created to work with LLM proxies in Foundry. I get a 403 Forbidden when creating an access token with my OSDK client. Is this a known issue? Are personal access tokens required for LLM proxies? If so, can you please fix this?

```typescript
import {
  SupportedFoundryClients,
  type OpenAIService,
} from '@codestrap/developer-foundations-types';
import OpenAI from 'openai';
import { foundryClientFactory } from '../factory/foundryClientFactory';
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat';
import type { RequestOptions } from 'openai/core';
import type { ResponseCreateParamsStreaming } from 'openai/resources/responses/responses';

// Add type definitions for the OpenAI response here, or in a separate file and
// import them, to ensure type safety when working with the API response data.
export function makeOpenAIService(): OpenAIService {
  const { getToken, url, ontologyRid } = foundryClientFactory(
    process.env.FOUNDRY_CLIENT_TYPE || S
```

Palantir Community — Latest · Feb 23, 2026
FOUNDRY · ONTOLOGY · AIP

What models are supported with LLM proxies?

When calling OpenAI LLM proxies, the only models I can get to work so far are the GPT-4 series, i.e. gpt-4.1, gpt-4.1-mini, etc. Is the GPT-5 series supported? How am I supposed to know which models are supported?

```typescript
import {
  SupportedFoundryClients,
  type OpenAIService,
} from '@codestrap/developer-foundations-types';
import OpenAI from 'openai';
import { foundryClientFactory } from '../factory/foundryClientFactory';
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat';
import type { RequestOptions } from 'openai/core';
import type { ResponseCreateParamsStreaming } from 'openai/resources/responses/responses';

// Add type definitions for the OpenAI response here, or in a separate file and
// import them, to ensure type safety when working with the API response data.
export function makeOpenAIService(): OpenAIService {
  const { getToken, url, ontologyRid } = foundryClientFactory(
    process.env.FOUNDRY_CLIENT_TYPE || SupportedFoundryClients.PRIVATE,
    unde
```

Palantir Community — Latest · Feb 23, 2026
FOUNDRY · AIP · PARTNERSHIP · RELEASE

Palantir and L3Harris

Palantir and L3Harris are partnering to reindustrialize the US defense industrial base through AI-powered production, utilizing Palantir's Warp Speed operating system. This initiative integrates AI onto the factory floor to streamline operations and supply chain management, accelerating the delivery of critical capabilities like the US Army's TITAN.

Palantir Blog - Medium · Feb 22, 2026