<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[RSS Feed]]></title><description><![CDATA[RSS Feed]]></description><link>http://direct.ecency.com</link><image><url>http://direct.ecency.com/logo512.png</url><title>RSS Feed</title><link>http://direct.ecency.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 01:19:49 GMT</lastBuildDate><atom:link href="http://direct.ecency.com/created/localai/rss.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[My Progress with Local AI - Setting a TINY Model for Open-WebUI's Post-Response Tasks]]></title><description><![CDATA[After trying multiple AI interfaces, including LMStudio, Msty.ai, and Koboldcpp's built-in UI, I finally settled on Open-WebUI as my daily driver. I installed the Docker version, and linked it to my Koboldcpp's]]></description><link>http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-setting-a-tiny-model-for-openwebuis-postresponse-tasks-9vs</link><guid isPermaLink="true">http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-setting-a-tiny-model-for-openwebuis-postresponse-tasks-9vs</guid><category><![CDATA[hive-163521]]></category><dc:creator><![CDATA[ahmadmanga]]></dc:creator><pubDate>Sun, 04 Jan 2026 08:19:33 GMT</pubDate><enclosure url="https://images.ecency.com/p/62PdCouTvNPDFdqJorCLnfauvZdwTKWtZntNNG1L9gEBKsGfXWra2gnU4mUcAvddKQtcDv5TKfduBJbt1ohkE2yu9KMUdvL4DUVEC6xUmArJhTC?format=match&amp;mode=fit" length="0" type="false"/></item><item><title><![CDATA[My Progress with Local AI - Running LLMs on AMD Ryzen 7640HS]]></title><description><![CDATA[I've been learning about using AI locally for a few months now. First, I learned about Quantization, Llamacpp, and the GGUF format. 
I managed to get some models running on my Steam Deck, though heavily quantized]]></description><link>http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-running-llms-on-amd-ryzen-7640hs-vh</link><guid isPermaLink="true">http://direct.ecency.com/hive-163521/@ahmadmanga/my-progress-wih-local-ai-running-llms-on-amd-ryzen-7640hs-vh</guid><category><![CDATA[hive-163521]]></category><dc:creator><![CDATA[ahmadmanga]]></dc:creator><pubDate>Wed, 31 Dec 2025 18:26:18 GMT</pubDate><enclosure url="https://images.ecency.com/p/FUkUE5bzkAZTc8b5qL462mgM26YSRNZxmH85aRkdAaSFXvuobFHhzocdR1yVMBisR3mLZwVeuGfAffuW58w7HPuR9D5SZtSe8y7aATQd1yD8eg2tZRVabgCSHiDa2mvsdzbh4TmZtzXTr2eC5xqJY1gRCKR9rR5tDMxJ?format=match&amp;mode=fit" length="0" type="false"/></item><item><title><![CDATA[[Human] Building a Localized HyperSmart LLM System Using Two Models: Llama2 and Stable Diffusion]]></title><description><![CDATA[As many of you have seen over the past few months, my feed has featured various interesting AI outputs. Why is this? I've been using the Hive blockchain to train different AI systems at various points of competition.]]></description><link>http://direct.ecency.com/llama2/@gray00/human-building-a-localized-hypersmart-llm-system-using-three-models</link><guid isPermaLink="true">http://direct.ecency.com/llama2/@gray00/human-building-a-localized-hypersmart-llm-system-using-three-models</guid><category><![CDATA[llama2]]></category><dc:creator><![CDATA[gray00]]></dc:creator><pubDate>Tue, 26 Sep 2023 10:28:33 GMT</pubDate><enclosure url="https://images.ecency.com/p/PB8ro82ZpZP35bVGjGoE93K3E4U5KX8KtMBJ2rgQFvypeL1PVZhuB8fC5bt9LQ6e7E4DBF8FcTKe7cGZJJKmh2AH3HzDYRZShEqY9JxcHt4QsXZG?format=match&amp;mode=fit" length="0" type="false"/></item></channel></rss>