<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[RSS Feed]]></title><description><![CDATA[RSS Feed]]></description><link>http://direct.ecency.com</link><image><url>http://direct.ecency.com/logo512.png</url><title>RSS Feed</title><link>http://direct.ecency.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 16 May 2026 03:32:44 GMT</lastBuildDate><atom:link href="http://direct.ecency.com/created/deep-learnning/rss.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[[AISec] Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning (Adversarial Examples Defeat Image Captioning Systems)]]></title><description><![CDATA[Addressing the problem of adversarial-example attacks on deep learning systems, researchers from MIT, UC Davis, IBM Research, and Tencent AI Lab published a paper on arXiv proposing a method for generating adversarial examples against neural image captioning systems. Experimental results show that image captioning systems can be easily fooled. Paper link:]]></description><link>http://direct.ecency.com/aisec/@aisec/aisec-show-and-fool-crafting-adversarial-examples-for-neural-image-captioning</link><guid isPermaLink="true">http://direct.ecency.com/aisec/@aisec/aisec-show-and-fool-crafting-adversarial-examples-for-neural-image-captioning</guid><category><![CDATA[aisec]]></category><dc:creator><![CDATA[aisec]]></dc:creator><pubDate>Mon, 22 Jan 2018 06:46:03 GMT</pubDate></item></channel></rss>