<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title><![CDATA[RSS Feed]]></title>
<description><![CDATA[RSS Feed]]></description>
<link>http://direct.ecency.com</link>
<image><url>http://direct.ecency.com/logo512.png</url><title>RSS Feed</title><link>http://direct.ecency.com</link></image>
<generator>RSS for Node</generator>
<lastBuildDate>Fri, 15 May 2026 21:51:44 GMT</lastBuildDate>
<atom:link href="http://direct.ecency.com/@aisec/rss" rel="self" type="application/rss+xml"/>
<item>
<title><![CDATA[[AISec] Security Risks in Deep Learning Implementations: security research on deep learning frameworks from 360 Research Institute]]></title>
<description><![CDATA[Paper link: Screenshots: Could reading this paper alone be enough to start finding vulnerabilities?]]></description>
<link>http://direct.ecency.com/aisec/@aisec/aisec-security-risks-in-deep-learning-implementations-360</link>
<guid isPermaLink="true">http://direct.ecency.com/aisec/@aisec/aisec-security-risks-in-deep-learning-implementations-360</guid>
<category><![CDATA[aisec]]></category>
<dc:creator><![CDATA[aisec]]></dc:creator>
<pubDate>Mon, 22 Jan 2018 06:59:51 GMT</pubDate>
<enclosure url="https://images.ecency.com/p/2gsjgna1uruvUuS7ndh9YqVwYGPLVszbFLwwpAYXZuSkg4Xs4eWin3ezxBFv6BuQUg9UTzVC2fwAqVGPABBqKdx3PxFvC5a4vcYEN98dE2DBq8ncea?format=match&amp;mode=fit" length="0" type="false"/>
</item>
<item>
<title><![CDATA[[AISec] Finding Bugs in TensorFlow with LibFuzzer: using LibFuzzer to hunt for TensorFlow vulnerabilities]]></title>
<description><![CDATA[Link: libFuzzer tutorial: What might fuzzing TensorFlow with LibFuzzer turn up? A later post will analyze TensorFlow's CVEs.]]></description>
<link>http://direct.ecency.com/tensorflow/@aisec/aisec-finding-bugs-in-tensorflow-with-libfuzzer-libfuzzer-tensorflow</link>
<guid isPermaLink="true">http://direct.ecency.com/tensorflow/@aisec/aisec-finding-bugs-in-tensorflow-with-libfuzzer-libfuzzer-tensorflow</guid>
<category><![CDATA[tensorflow]]></category>
<dc:creator><![CDATA[aisec]]></dc:creator>
<pubDate>Mon, 22 Jan 2018 06:53:33 GMT</pubDate>
<enclosure url="https://images.ecency.com/p/2gsjgna1uruvUuS7ndh9YqVwYGPLVszbFLwwpAYXZpZrTintMFdAF5USi3faxe2kKZqooN7rjo3MBR1o5ooWvnVQLJuDWqZiMiweCN2yi4TMZq5PpA?format=match&amp;mode=fit" length="0" type="false"/>
</item>
<item>
<title><![CDATA[[AISec] Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning (adversarial examples defeat image captioning systems)]]></title>
<description><![CDATA[On the problem of adversarial-example attacks against deep learning systems, researchers from MIT, UC Davis, IBM Research, and Tencent AI Lab have published a paper on arXiv proposing a method for generating adversarial examples against neural image captioning systems. Experimental results show that image captioning systems can be fooled with ease. Paper link:]]></description>
<link>http://direct.ecency.com/aisec/@aisec/aisec-show-and-fool-crafting-adversarial-examples-for-neural-image-captioning</link>
<guid isPermaLink="true">http://direct.ecency.com/aisec/@aisec/aisec-show-and-fool-crafting-adversarial-examples-for-neural-image-captioning</guid>
<category><![CDATA[aisec]]></category>
<dc:creator><![CDATA[aisec]]></dc:creator>
<pubDate>Mon, 22 Jan 2018 06:46:03 GMT</pubDate>
</item>
<item>
<title><![CDATA[[machine-learning] machine-learning-flashcard: key machine learning concepts in 300 cheat sheets]]></title>
<description><![CDATA[The cheat sheets come from machine learning personality Dr. Chris Albon. Chris is a passionate machine learning practitioner and data scientist, and a co-founder of the startup NewKnowldgeAI. While studying machine learning himself, he found that flashcards like these were far more effective for understanding and memorizing machine learning concepts than simply reading textbooks, so he compiled 300+ concepts, hand-drew them in colored pen, scanned them, and assembled this polished cheat-sheet pack. Of course, the cheat sheets are entirely in English, and on Chris]]></description>
<link>http://direct.ecency.com/machine-learning/@aisec/machine-learning-machine-learning-flashcard-300</link>
<guid isPermaLink="true">http://direct.ecency.com/machine-learning/@aisec/machine-learning-machine-learning-flashcard-300</guid>
<category><![CDATA[machine-learning]]></category>
<dc:creator><![CDATA[aisec]]></dc:creator>
<pubDate>Mon, 22 Jan 2018 06:38:09 GMT</pubDate>
</item>
</channel>
</rss>