<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[RSS Feed]]></title><description><![CDATA[RSS Feed]]></description><link>http://direct.ecency.com</link><image><url>http://direct.ecency.com/logo512.png</url><title>RSS Feed</title><link>http://direct.ecency.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 09:56:57 GMT</lastBuildDate><atom:link href="http://direct.ecency.com/created/hdfs/rss.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[Parallelize code with Spark!]]></title><description><![CDATA[Introduction: We will experiment with the parallelism model using Spark, along with Hadoop to manage the machines. To make use of our installation, we chose to program in Python]]></description><link>http://direct.ecency.com/hive-114606/@loumeni/paralleliser-le-code-avec-spark-</link><guid isPermaLink="true">http://direct.ecency.com/hive-114606/@loumeni/paralleliser-le-code-avec-spark-</guid><category><![CDATA[hive-114606]]></category><dc:creator><![CDATA[loumeni]]></dc:creator><pubDate>Fri, 19 Dec 2025 15:13:42 GMT</pubDate><enclosure url="https://images.ecency.com/p/26uUsAjKTsXCDw7zixZR182JbFKvgzJ9YwsFpTVcRaGCmsqhA1unTgpqu735m9iorToNNSNrP38PeJwfij7Ad6CN7B2hK4DNHoHmVzuE77Wm9REKSevSnNwdniunmskv1wa5nuGL6ZmYXyBeaeUNo5aypvHaWxeYihtpo8?format=match&amp;mode=fit" length="0" type="false"/></item><item><title><![CDATA[Setting up and experimenting with Spark]]></title><description><![CDATA[Today we tackle a substantial piece of parallel and distributed programming, and we offer this tutorial to explain how to set up Spark both locally and on a cluster]]></description><link>http://direct.ecency.com/hive-114606/@boutvalentin/mise-en-place-et-experimentation-de-spark</link><guid 
isPermaLink="true">http://direct.ecency.com/hive-114606/@boutvalentin/mise-en-place-et-experimentation-de-spark</guid><category><![CDATA[hive-114606]]></category><dc:creator><![CDATA[boutvalentin]]></dc:creator><pubDate>Sun, 17 Dec 2023 10:58:33 GMT</pubDate><enclosure url="https://images.ecency.com/p/24rqX9pG7ZxY69usoEjVNbF8j2tsQA9yVopGqb2LyxyKWKnJgVAj8qN1dz4JBur2txiQKhKqd5tiffqGuaZRqHw6XaEQ5A8cNHHbxoXQBMtduU2Qq1K5zibRAQeAQvQBVUzw3SFxwSgbfdeakfQ5Jux5nUKnic5E92V3FiFq7VpM9bGfJ2RGnmjy1AStJinPe4YxXvSsT88wLbbRicMXCLz5DfXGqr?format=match&amp;mode=fit" length="0" type="false"/></item><item><title><![CDATA[Import 1 billion records from Oracle to HDFS in record time]]></title><description><![CDATA[The problem: A large-scale manufacturing organization aggregates data from different sources into a single Oracle table that holds just over a billion records.]]></description><link>http://direct.ecency.com/big-data/@amirdhagopal/import-1-billion-records-from-oracle-to-hdfs-in-a-record-time</link><guid isPermaLink="true">http://direct.ecency.com/big-data/@amirdhagopal/import-1-billion-records-from-oracle-to-hdfs-in-a-record-time</guid><category><![CDATA[big-data]]></category><dc:creator><![CDATA[amirdhagopal]]></dc:creator><pubDate>Thu, 26 Dec 2019 13:34:18 GMT</pubDate></item><item><title><![CDATA[HDFS data migration]]></title><description><![CDATA[Ensure the newly created directory on the destination (HDFS) server has 777 permissions. 2. Ensure the namenode, datanode, and yarn processes are all running on the destination server; the migration runs map tasks. 3. Same-version migration: bin/hadoop distcp hdfs://192.168.100.110:8020/carInfo hdfs://192.168.100.122:8020/carInfo 4. For migration between different Hadoop versions: bin/hadoop]]></description><link>http://direct.ecency.com/hdfs/@ywzqwwt/hdfs</link><guid isPermaLink="true">http://direct.ecency.com/hdfs/@ywzqwwt/hdfs</guid><category><![CDATA[hdfs]]></category><dc:creator><![CDATA[ywzqwwt]]></dc:creator><pubDate>Mon, 27 Aug 2018 07:57:42 GMT</pubDate></item></channel></rss>