<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title><![CDATA[RSS Feed]]></title>
    <description><![CDATA[RSS Feed]]></description>
    <link>http://direct.ecency.com</link>
    <image>
      <url>http://direct.ecency.com/logo512.png</url>
      <title>RSS Feed</title>
      <link>http://direct.ecency.com</link>
    </image>
    <generator>RSS for Node</generator>
    <lastBuildDate>Wed, 22 Apr 2026 01:21:51 GMT</lastBuildDate>
    <atom:link href="http://direct.ecency.com/created/npu/rss.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title><![CDATA[My Progress with Local AI - Failing to Get NPU to "Just Work"]]></title>
      <description><![CDATA[This is a follow-up to the previous article, in which I talked about using KoboldCpp to run AI models locally. For AMD APUs, this framework mainly uses the CPU and GPU via Vulkan. I noticed that KoboldCpp doesn't]]></description>
      <link>http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-failing-to-get-npu-to-just-work-hiw</link>
      <guid isPermaLink="true">http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-failing-to-get-npu-to-just-work-hiw</guid>
      <category><![CDATA[hive-163521]]></category>
      <dc:creator><![CDATA[ahmadmanga]]></dc:creator>
      <pubDate>Thu, 01 Jan 2026 18:42:21 GMT</pubDate>
      <enclosure url="https://images.ecency.com/p/3zpz8WQe4SNGkE8SaF5DnZTQp6KPtNHzkS3TJnChATe5qdVXkH1DTr9PHmyM6y6rxYR6wqjcrin6bT8XZjAWgjE1Wgndox116pJ3EgqN3V6DeKYyvaw6NapVP4tou2U971qdyy7yypSJkkLvsfft?format=match&amp;mode=fit" length="0" type="false"/>
    </item>
  </channel>
</rss>