News & Intel
Daily AI-curated intelligence on Palantir Foundry, Ontology, AIP, Apollo, contracts, and community feedback. Updated automatically via GitHub Actions every day at 7 AM UTC.
Bug Report: Action type used by Agent displays "0 Dependents"
While cleaning up my ontology and deleting unused actions, I noticed that AIP Agent Studio is using some actions that show zero dependents in the action type overview. Please add AIP Agent Studio to the action type dependents to prevent accidentally deleting action types that agents depend on. 1 post - 1 participant Read full topic
How can you call a variable like this in AIP Logic?
Hi, I am studying the Build with AIP example “Leverage feedback loops in AIP Logic”. However, there is something I have never seen before and don’t know how to do. In the picture above, the relaventInstructions variable is referenced as a whole object set, but when I call the variable, I can only access its properties and the formatted variable. Does anyone know how to reference a whole object set variable? Regards, 2 posts - 2 participants Read full topic
AIP Assist model upgrade - Plans?
Hello, AIP Assist currently relies on GPT-4.1 and Claude 4. Are there any plans to upgrade these to more recent models? Cheers, 1 post - 1 participant Read full topic
Agent deleting other fields
I have an action that can edit any of a dozen optional fields. When I use AIP Agent Studio and ask the agent to use the action to change one of these fields to a new value, the other fields get deleted because the action isn’t preserving their previous values. I have tried and failed to solve this reliably with better prompting (i.e., “change this and keep all other values unchanged”). The only solution I’ve found is to use a dozen very narrow actions, but that is creating a mess as the project grows. Is it possible for the agent to keep the previous values of the other properties by default when it runs an action? 2 posts - 2 participants Read full topic
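One workaround for the post above is a "default-keep" merge done in the action's backing logic: read the object's current values first, then let the edit override only the fields it explicitly sets. This is a minimal sketch in plain TypeScript; the field names (`status`, `owner`, `notes`) are illustrative, not from the post, and the wiring into a Foundry action is left out.

```typescript
// Sketch of a "default-keep" merge: fields the agent did not mention keep
// their previous values instead of being cleared.

type FieldPatch = Record<string, unknown>;

/**
 * Merge a partial edit over the object's current values. Only keys that are
 * present in `patch` with a defined value override the current state.
 */
function defaultKeepMerge(current: FieldPatch, patch: FieldPatch): FieldPatch {
  const merged: FieldPatch = { ...current };
  for (const [key, value] of Object.entries(patch)) {
    if (value !== undefined) {
      merged[key] = value;
    }
  }
  return merged;
}

// Example: the agent only changes `status`; `owner` and `notes` survive.
const current = { status: "open", owner: "alice", notes: "initial triage" };
const patch = { status: "closed", owner: undefined };
const params = defaultKeepMerge(current, patch);
// params => { status: "closed", owner: "alice", notes: "initial triage" }
```

The design choice here is that the merge happens server-side in the action, so it works regardless of how reliably the agent follows prompt instructions.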
These Are the Top 5 Stocks to Own for 2026, According to ChatGPT
With fears rippling across Wall Street that LLMs can replace white-collar jobs, can ChatGPT replace your money manager, too?
Problem signing in with passkey to aip-developer account
I created an AIP Developer account last week to learn about the platform, since I am currently job hunting. Today I have not been able to sign back in with the passkey, nor am I able to reset it. Not sure if it’s an issue with the system or my account? Thanks, Mark 2 posts - 1 participant Read full topic
Do functions v2 support importing llms in model catalog?
I just bootstrapped a new functions v2 repo (the recommended option) and I don’t see an option to import a model like gpt5-mini. How can I use models from the model catalog in functions v2? 1 post - 1 participant Read full topic
Can I stream outputs from AIP LLMs without using Agent Studio
I am well aware of using the sessions API to stream responses from AIP Agent Studio agents. I do not want to do that. Agents built in Agent Studio proxy the model responses, almost always summarize or alter them, and are designed for tool calling, which I do not want. Do not tell me to use AIP Agent Studio; I do not want it. What I want is the ability to stream outputs from models in the model catalog (via a TypeScript function, because that is still the only way to use them; why, just why?). Is this possible? When will we be able to use open-source SDKs and leverage the inference endpoints in AIP the way we do when working directly with the labs? 2 posts - 1 participant Read full topic
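If the LLM proxy does expose an OpenAI-compatible server-sent-events endpoint (an assumption; the post does not confirm this for model-catalog models, and the URL below is a placeholder), streaming can be consumed with built-in `fetch` and a small SSE parser, with no agent in the loop:

```typescript
// Sketch: consuming an OpenAI-style SSE stream with built-in fetch.
// Endpoint path and chunk shape are assumptions based on the OpenAI
// streaming format ("data:" lines carrying delta chunks, "[DONE]" sentinel).

/** Extract the text delta from one SSE "data:" line, or null if none. */
function deltaFromSseLine(line: string): string | null {
  if (!line.startsWith("data:")) return null;
  const payload = line.slice("data:".length).trim();
  if (payload === "" || payload === "[DONE]") return null;
  try {
    const chunk = JSON.parse(payload);
    return chunk.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;
  }
}

/** Stream a completion, invoking onDelta for each text fragment. */
async function streamCompletion(
  url: string, // hypothetical, e.g. "https://<stack>/api/llm/v1/chat/completions"
  token: string,
  body: object,
  onDelta: (text: string) => void,
): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
      Accept: "text/event-stream",
    },
    body: JSON.stringify({ ...body, stream: true }),
  });
  if (!res.ok || !res.body) throw new Error(`stream failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? ""; // keep the trailing partial line
    for (const line of lines) {
      const delta = deltaFromSseLine(line);
      if (delta !== null) onDelta(delta);
    }
  }
}
```

The parser is deliberately pure so it can be reused whether the bytes arrive via `fetch`, the OpenAI SDK's raw response, or any other transport.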
Follow up on converting PDF into one image per page
https://community.palantir.com/t/how-can-i-convert-a-pdf-into-one-image-per-page-for-further-vision-llm-processing/4208/3?u=brandon Hi @Isy, I’m working to achieve something similar to Vincent with regard to splitting PDF pages. Is there any update on that functionality for Pipeline Builder? I’m using extract text for the entire PDF and filtering by rows, and would like to parse specific pages visually, but the media reference field covers the entire PDF. Thank you, Brandon 1 post - 1 participant Read full topic
NLP for creating schema in LLM Node in Pipeline Builder
Minor feature request: when I use an LLM node in Pipeline Builder, I would like to be able to paste a prompt and have AIP create the schema for me (the main use case being entity extraction). Many times I have the schema I want as text but can’t easily add it. An import via CSV/Excel would be a good alternative as well. https://www.palantir.com/docs/foundry/pipeline-builder/pipeline-builder-llm I looked at the link above to see if this already exists; if it does, please link it, I’d really appreciate it! 1 post - 1 participant Read full topic
OSDK security does not work with LLM proxies
Below is a client I created to work with LLM proxies in Foundry. I get a 403 Forbidden when creating an access token with my OSDK client. Is this a known issue? Are personal access tokens required for LLM proxies? If so, can you please fix this?

```typescript
import {
  SupportedFoundryClients,
  type OpenAIService,
} from '@codestrap/developer-foundations-types';
import OpenAI from 'openai';
import { foundryClientFactory } from '../factory/foundryClientFactory';
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat';
import type { RequestOptions } from 'openai/core';
import type { ResponseCreateParamsStreaming } from 'openai/resources/responses/responses';

// Add type definitions for the OpenAI response here, or in a separate file and
// import them, to ensure type safety when working with the API response data.
export function makeOpenAIService(): OpenAIService {
  const { getToken, url, ontologyRid } = foundryClientFactory(
    process.env.FOUNDRY_CLIENT_TYPE || S
```

1 post - 1 participant Read full topic
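One way to narrow down the 403 in the post above is to bypass the OSDK wrapper and hit the proxy directly with `fetch`, once with the OSDK-minted token and once with a personal access token; if one succeeds and the other fails, the problem is the token's scopes rather than the client code. This is a diagnostic sketch; the endpoint path and model ID are placeholders, not confirmed Foundry routes.

```typescript
// Sketch: probe the LLM proxy with a bare fetch call to isolate auth issues.
// `${baseUrl}/chat/completions` and "gpt-4.1-mini" are assumptions.

function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

async function probeProxy(baseUrl: string, token: string): Promise<number> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: authHeaders(token),
    body: JSON.stringify({
      model: "gpt-4.1-mini",
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 1,
    }),
  });
  // 403 with one token source but 2xx with the other pins the issue on auth.
  return res.status;
}
```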
What models are supported with LLM proxies?
When calling OpenAI LLM proxies, the only models I can get to work so far are the GPT-4 series, i.e. gpt-4.1, gpt-4.1-mini, etc. Is the GPT-5 series supported? How am I supposed to know which models are supported?

```typescript
import {
  SupportedFoundryClients,
  type OpenAIService,
} from '@codestrap/developer-foundations-types';
import OpenAI from 'openai';
import { foundryClientFactory } from '../factory/foundryClientFactory';
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat';
import type { RequestOptions } from 'openai/core';
import type { ResponseCreateParamsStreaming } from 'openai/resources/responses/responses';

// Add type definitions for the OpenAI response here, or in a separate file and
// import them, to ensure type safety when working with the API response data.
export function makeOpenAIService(): OpenAIService {
  const { getToken, url, ontologyRid } = foundryClientFactory(
    process.env.FOUNDRY_CLIENT_TYPE || SupportedFoundryClients.PRIVATE, unde
```

1 post - 1 participant Read full topic
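Absent a documented model list, one pragmatic answer to "how am I supposed to know?" is to probe candidate model IDs empirically and record which ones the proxy accepts. A minimal sketch follows; the candidate IDs and the endpoint path are assumptions for illustration, not a confirmed Foundry catalog.

```typescript
// Sketch: empirically discover which model IDs an OpenAI-compatible proxy
// accepts by sending a one-token request per candidate and checking status.

const CANDIDATE_MODELS = ["gpt-4.1", "gpt-4.1-mini", "gpt-5", "gpt-5-mini"];

/** Build the minimal one-token request body for a given model ID. */
function probeBody(model: string): string {
  return JSON.stringify({
    model,
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 1,
  });
}

async function supportedModels(baseUrl: string, token: string): Promise<string[]> {
  const ok: string[] = [];
  for (const model of CANDIDATE_MODELS) {
    const res = await fetch(`${baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: probeBody(model),
    });
    // A non-2xx here typically means the proxy does not route that model.
    if (res.ok) ok.push(model);
  }
  return ok;
}
```

Running this once per environment gives a concrete, current answer, since proxy-side model availability can change without notice.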
Anthropic’s safety-first AI collides with the Pentagon as Claude expands into autonomous agents
As Anthropic releases its most autonomous agents yet, a mounting clash with the military reveals the impossible choice between global scaling and a “safety first” ethos
Palantir and L3Harris
Palantir and L3Harris are partnering to reindustrialize the US defense industrial base through AI-powered production, utilizing Palantir's Warp Speed operating system. This initiative integrates AI onto the factory floor to streamline operations and supply chain management, accelerating the delivery of critical capabilities like the US Army's TITAN.
Securing Agents in Production (Agentic Runtime, #1)
Palantir AIP's Agentic Runtime provides an integrated toolchain for building, deploying, and managing AI agents in mission-critical environments. It features a robust security architecture that blends marking-, purpose-, and role-based policies, dynamic lineage across data and logic, and integrated change management for both human and agentic workflows.
How Palantir AIP Accelerates Data Migration
Palantir AIP accelerates complex enterprise data migrations by leveraging AI-accelerated workflows and maintaining complete contextual awareness throughout the lifecycle. This approach drastically reduces migration time from years to months, enabling rapid legacy system retirement and activation of new, supercharged data workflows.