
AI Security Threat Risk Matrix

Tencent AI Lab · Tencent Security Platform Department Zhuque Lab

September 17, 2020


Figure 1: The AI security threat risk matrix.

1 Introduction

In recent years, AI technology has been widely deployed in fields such as image recognition, speech recognition, autonomous driving and finance. As these applications spread, the security of AI systems themselves has become an issue that can no longer be ignored: attacks against AI systems can cause models to make wrong decisions, be placed under an attacker's control, leak training data or business secrets, or cause direct financial loss, harming both the users and the businesses that rely on them. Understanding how AI systems can be attacked throughout their life cycle, and how they can be defended, is therefore a pressing concern for practitioners, and is the motivation for this report.

Drawing on the idea of the ATT&CK framework from traditional network security, this report collects the attack techniques against AI systems observed in recent research and practice and organizes them into a threat risk matrix (Figure 1). We hope this matrix helps AI developers and security practitioners understand the full life cycle of an AI system, the risk points at each stage, and the corresponding defense methods, and thereby provides a reference for the secure deployment and application of AI systems.

2 Environment Access

2.1 Dependency Software Attacks

Threat level: *

Machine learning systems depend on a large number of third-party software frameworks and components, and vulnerabilities in any of them can affect the security of the whole AI system. Popular deep learning frameworks such as TensorFlow and PyTorch are large code bases in their own right, and they additionally rely on many third-party libraries such as numpy and opencv for data parsing and pre-processing, so security issues in these dependencies directly threaten the AI systems built on top of them. Deep learning frameworks and their dependencies have repeatedly been found to contain memory-corruption, buffer-overflow, integer-overflow and division-by-zero style vulnerabilities [1, 2]. Taking the widely used opencv library as an example, a large number of security flaws have been disclosed in recent years; in particular, two serious vulnerabilities found in 2019 (CVE-2019-5063 and CVE-2019-5064) are buffer-overflow bugs whose exploitation can lead to denial of service or even arbitrary code execution [3, 4], directly compromising the application that parses attacker-controlled inputs. As Figure 2 shows, flaws in these dependencies (denial of service, heap overflows, integer overflows and similar issues) can also silently change the behaviour of the model pipeline, so their impact on the security of AI systems should not be underestimated.


Figure 2: An opencv flaw corrupts the parsed input and changes the classification result of a machine learning framework [1].

Figure 3: Changing Kubeflow's Istio ingress service to type Load-Balancer exposes it to the Internet; an attacker who gains access can tamper with the Jupyter notebook images to mount an attack [5].

In addition, attackers can plant malicious code inside TensorFlow model files and attack the host that loads them [6]. Loading third-party pre-trained models is a routine operation in model development, so this hidden risk is easy to overlook; externally obtained model files should be treated as untrusted inputs, and the security awareness of researchers and users in this regard needs to be raised.

Security recommendations: check and update dependency software promptly when serious vulnerabilities are disclosed, to prevent known flaws in dependencies from being exploited against the AI system.
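As one small illustration of this recommendation, the following is a minimal sketch that enumerates the installed versions of a few common ML dependencies so they can be checked against published CVE advisories (such as the OpenCV and NumPy issues cited above). The package list is an illustrative assumption, not an exhaustive inventory.

```python
# Minimal sketch: report installed versions of common ML dependencies so they can
# be compared against known-vulnerable version ranges from CVE advisories.
from importlib import metadata

WATCHED = ["numpy", "opencv-python", "tensorflow", "torch", "pillow"]  # example list

def report_versions(packages):
    for name in packages:
        try:
            print(f"{name:>15}: {metadata.version(name)}")
        except metadata.PackageNotFoundError:
            print(f"{name:>15}: not installed")

if __name__ == "__main__":
    report_versions(WATCHED)
```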

2.2 Malicious Docker Access

Threat level: *

Machine learning jobs can be deployed onto a Kubernetes cluster with the KubeFlow framework [7], which greatly simplifies the orchestration of ML workloads. This convenience, however, also opens an attack surface, and such ML nodes have already been turned into cryptomining machines [5]. In June 2020, the Microsoft Azure Security Center warned that it had detected large-scale scanning and attacks against Kubeflow deployments. The root cause was misconfiguration: to make access easier, some users changed the default settings and switched the Istio ingress service to type Load-Balancer, which exposes the service to the Internet, so attackers could reach the dashboard and then deploy malicious workloads into the cluster in various ways. Internet-wide search engines such as Shodan and Fofa can be used to locate Kubernetes services exposed on the public network, giving attackers the opportunity to execute malicious code [7]. As Figure 3 shows, when Jupyter notebook servers are created in Kubeflow, an attacker with dashboard access can point them at malicious images; the attacker can also run arbitrary Python code inside Jupyter to deploy further malicious programs, manipulate data and code, and escalate privileges, ultimately threatening the security of the machine learning models themselves.


Figure 4: A sequence-triggered hardware trojan circuit for a neural network accelerator [8].

Security recommendations: developers and operators should apply standard hardening practices when deploying containerized ML workloads; the threat matrix for Kubernetes published by Microsoft is a useful reference [9].

2.3 Hardware Backdoor Attacks

Threat level: *

Hardware backdoor (hardware trojan) attacks target the hardware deployment stage, after the model has been trained: by making small modifications to the underlying chips or circuits on which the model runs, an attacker implants a backdoor that leaves the model functioning normally on ordinary inputs but produces attacker-chosen behaviour when a specific trigger appears. Because the malicious behaviour only appears under the trigger condition, such backdoors are highly covert.

Modern integrated circuits commonly integrate third-party IP cores as ready-made building blocks, and these cores can serve as carriers for hardware trojans. By designing specific trigger circuitry inside an IP core, an attacker can affect every accelerator that uses the compromised module [10], while the hardware overhead can be tiny (on the order of 0.03% additional area). Attacks at the bit level are similarly alarming: the bit-flip attack [11] shows that flipping just 13 of the roughly 93 million bits storing a model's weights can degrade an ImageNet classifier from about 70% accuracy to essentially random output. Reference [8] proposes a sequence-triggered hardware trojan circuit for neural-network accelerators, which behaves normally on ordinary images but outputs wrong classification results when a specific trigger sequence of inputs appears; its framework is shown in Figure 4.

At present, hardware backdoor attacks are still a relatively new research direction and related work is limited, but in some scenarios the threat is serious: in systems such as autonomous driving or medical AI, a hardware-level backdoor triggered in a specific situation could have severe consequences. On the other hand, implanting a hardware backdoor requires control over the chip design or manufacturing chain, so the attack scenarios are constrained and the cost of mounting such an attack is high.

2.4 Supply Chain Attacks

Threat level: *

As Figure 5 shows, attackers can abuse the channels through which models and code reach their users: by publishing tampered pre-trained models on open platforms, modifying open-source projects, or compromising the software or hardware supply chain, they can slip malicious code or model backdoors into the victim's environment and thereby carry out a supply chain attack.


Figure 5: Illustration of attacker entry points along the supply chain.

Figure 6: A malicious command embedded in a Torch model file is executed when the file is loaded.

For example, through arbitrary-command-execution flaws such as NumPy CVE-2019-6446, an attacker can embed malicious commands in a model file. When the victim loads the file with a function such as torch.load, the embedded command is deserialized and executed. Executing the payload does not interfere with the normal use of the model, so the attack is highly covert. As Figure 6 shows, an attacker can use this technique to plant a command in a model file (for example one saved by PyTorch) that runs as soon as the model is loaded, and in the same way can tamper with the host or execute arbitrary commands of their choosing [12]; such attacks therefore pose a serious threat.

Security recommendations: verify that loaded model files come from a trusted source, and only use model files provided by trusted third parties.
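The following is a minimal sketch of why pickle-based model files are dangerous and of one simple mitigation. torch.load() uses pickle internally; the demonstration below uses plain pickle from the standard library, and the "payload" only prints a message, but the same mechanism can run arbitrary commands. The hash-verification helper is an illustrative pattern, not a complete defense.

```python
# Sketch: any object with a crafted __reduce__ method runs code at unpickling time,
# which is why untrusted model files must never be loaded blindly.
import hashlib
import pickle


class Payload:
    def __reduce__(self):
        # Called automatically during unpickling.
        return (print, ("malicious code would run here",))


blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message -> code execution on load


def load_verified(path: str, expected_sha256: str) -> bytes:
    """Return the file bytes only if they match a hash published by a trusted source."""
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("model file hash mismatch; refusing to load")
    return data
```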

3 Data Collection and Preparation

3.1 Data Poisoning

Threat level: ***


Figure 7: From left to right: the targeted test image, poisoned training sample 1, poisoned training sample 2 [13].

Data poisoning tampers with part of the training data before training so that the model learned from it is compromised. Poisoning attacks fall into two broad categories: the first makes the network behave normally on ordinary test data while misclassifying specific, attacker-chosen inputs; the second degrades the network's performance on the test set as a whole, or even prevents training from converging.

The first category can be further divided into label poisoning and clean-label poisoning. In label poisoning, the injected samples carry labels that are obviously wrong to a human observer, so they are relatively easy to spot and filter out. Clean-label poisoning instead uses imperceptibly perturbed samples whose labels remain consistent with their content, which makes it far harder to detect. Notably, the attacker does not need the parameters of the victim model: placing a small number of poisoned samples into the data that may later be used for training is enough to make the trained model misclassify a specific target image [14]. By crafting the poison against an ensemble of substitute models, a poisoning ratio of only 1% can already achieve roughly a 50% attack success rate [13]. As Figure 7 shows, poisoning a few seemingly unrelated images in the training set makes the trained model classify a particular test image, one never seen during training, as the attacker-chosen class [13].
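The following is a hedged sketch of the feature-collision objective behind clean-label poisoning in the spirit of [14]: craft a poison image that stays visually close to a base image while its feature embedding collides with that of the target test image. The names feature_extractor, x_base and x_target are placeholders for the victim's (or a surrogate's) penultimate-layer network and for the chosen images; hyper-parameters are illustrative.

```python
# Sketch of feature-collision poison crafting (not the exact algorithm of [14]).
import torch


def craft_poison(feature_extractor, x_base, x_target, beta=0.1, lr=0.01, steps=200):
    phi_t = feature_extractor(x_target).detach()          # target's feature embedding
    x_poison = x_base.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((feature_extractor(x_poison) - phi_t) ** 2) \
             + beta * torch.sum((x_poison - x_base) ** 2)  # stay close to the base image
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_poison.clamp_(0.0, 1.0)                      # keep a valid image
    return x_poison.detach()
```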

The second category of poisoning aims to degrade the overall performance of the model [15]. Most existing work targets traditional machine learning models such as SVMs [16] and related learners [17], but this type of poisoning has also been shown to be effective against deep learning models [18].

At present, poisoning attacks in the white-box setting have been studied fairly thoroughly and work well, whereas poisoning black-box models remains a challenging open problem; how to lower the required poisoning ratio and raise the success rate and transferability of black-box poisoning is both a promising research direction and an emerging security concern.

Security recommendations: before training, use poisoning-detection methods [19, 20] to screen for abnormal samples and clean the data accordingly; keep track of data provenance so that untrusted sources can be excluded; and harden the training pipeline to raise the difficulty of a successful poisoning attack.

3.2 Data Backdoor Attacks

Threat level: ***

A backdoor attack is a newer class of attacks against machine learning models. The attacker implants a backdoor during training so that the infected model behaves normally under ordinary conditions, but once the backdoor is activated, the model's output becomes a malicious target specified in advance by the attacker. Because the model appears normal before the backdoor is triggered, this kind of malicious behaviour is hard to discover. Backdoor attacks become possible whenever the training process is not fully controlled by the user, for example when training on a third-party dataset, training on a third-party computing platform, or directly deploying a model provided by a third party.


Figure 8: Illustration of a poisoning-based backdoor attack [21].

Data backdoor attacks implant the backdoor by poisoning the training dataset [22, 23, 24]. As Figure 8 shows, in an image classification task some training images are stamped with a specific trigger, and their labels are changed to the attacker-specified target label. These poisoned samples are mixed with benign samples and used together to train the model. At test time, benign test samples are still classified correctly, but attacked samples that carry the trigger activate the backdoor embedded in the model and are predicted as the target class.
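The following is a hedged sketch of this kind of poisoning-based backdoor injection, in the style of BadNets [22]: a small trigger patch is stamped onto a fraction of training images, which are then relabelled as the attacker-chosen target class. Array shapes, the poisoning rate and the trigger pattern are illustrative assumptions.

```python
# Sketch of BadNets-style data poisoning for backdoor injection.
import numpy as np


def poison_dataset(images, labels, target_label, rate=0.05, patch_size=3):
    """images: float array (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:, :] = 1.0   # white square in the corner
    labels[idx] = target_label                         # relabel as the target class
    return images, labels


def add_trigger(image, patch_size=3):
    """Apply the same trigger at test time to activate the backdoor."""
    image = image.copy()
    image[-patch_size:, -patch_size:, :] = 1.0
    return image
```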

Current backdoor triggers can be made invisible to the human eye [23, 24, 25], and the labels of the poisoned samples can even remain consistent with their actual content (clean-label backdoors) [23, 24], which makes backdoor attacks even harder to detect. Moreover, beyond image classification, backdoor attacks have been demonstrated in other tasks and systems [26, 27, 28], so they pose a serious threat to the security of deep learning models.

Security recommendations: a number of defenses against backdoor attacks have been proposed. Existing backdoor defenses can be divided into empirical backdoor defenses and certified backdoor defenses. Empirical defenses generally perform well in practice but lack theoretical guarantees, whereas certified defenses come with provable guarantees under certain conditions but tend to perform worse in realistic settings; current certified defenses [29, 30] are mainly built on randomized smoothing. Empirical backdoor defenses are more diverse and can be roughly grouped into the following six basic categories [21].

• Defenses based on input preprocessing [31, 32, 33]: preprocess each test sample before prediction so that any trigger it may contain is destroyed and can no longer activate the backdoor (see the sketch after this list).

• Defenses based on model reconstruction [31, 34, 35]: rebuild the model, for example by pruning or fine-tuning, so that any backdoor embedded in it is destroyed.


• Defenses based on trigger reconstruction [36, 37, 38]: reconstruct the trigger of the backdoor hidden in an infected model, and then remove the backdoor by suppressing or unlearning the reconstructed trigger.

• Defenses based on model diagnosis [39, 40, 41]: directly diagnose whether a given model contains a backdoor, and refuse to deploy models that do.

• Sample filtering [42, 43, 44]: filter out samples that have been poisoned or carry the trigger, so that the backdoor is never created or activated.

• Poison suppression [45, 46]: suppress the influence of poisoned samples during training so that the backdoor cannot be successfully implanted.
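As promised in the first category above, here is a hedged sketch of the input-preprocessing idea: degrade each test image slightly (here by crude downscale-then-upscale) before prediction, hoping to break a small trigger patch while barely affecting clean accuracy. The name model.predict is a placeholder for the deployed classifier; this is only one very simple instance of the preprocessing family [31, 32, 33].

```python
# Sketch of an input-preprocessing backdoor defense.
import numpy as np


def shrink_restore(image, factor=2):
    """image: float array (H, W, C); nearest-neighbour downscale then upscale."""
    small = image[::factor, ::factor, :]
    restored = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return restored[: image.shape[0], : image.shape[1], :]


def predict_with_preprocessing(model, image):
    # The degraded image is what actually reaches the (possibly backdoored) model.
    return model.predict(shrink_restore(image)[None, ...])
```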

4 Model Training

4.1 Data Recovery from Gradients

Threat level: *

To address privacy and related concerns in model training, distributed training schemes are now widely used: the model is held on a central server, and at each iteration the server sends the current model to distributed clients, which compute gradients on their local data and return only the gradients for the parameter update [47]. Because the raw data never leaves the clients, this is commonly assumed to protect data privacy. Recent research shows, however, that the gradients alone are not safe: as Figure 9 illustrates, the training data of a client can be reconstructed from the gradients it uploads [48]. Through this channel, even under a distributed training framework, the central server (or anyone who can observe the gradients) can steal client data. As Figure 10 shows, the recovered data can be of rather high fidelity, which may cause serious privacy leakage for the clients.
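The following is a hedged sketch of gradient-based data recovery in the spirit of [48]: a dummy input and soft label are optimized so that the gradient they induce matches the gradient uploaded by a client. The names model, observed_grads and the shapes are placeholders; the step count and optimizer are illustrative.

```python
# Sketch of "deep leakage from gradients"-style reconstruction.
import torch
import torch.nn.functional as F


def recover_from_gradients(model, observed_grads, input_shape, num_classes, steps=200):
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        logits = model(dummy_x)
        # soft-label cross entropy between model output and the dummy label
        loss = -(F.softmax(dummy_y, dim=-1) * F.log_softmax(logits, dim=-1)).sum()
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach()   # the reconstructed input
```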

Security recommendations: use a larger batch size when computing gradients, which makes data recovery harder; add random noise to the gradients before uploading them to provide a degree of privacy protection.

Figure 9: Illustration of recovering training data from gradients [48].


Figure 10: Results of gradient-based data recovery on the CIFAR-10 dataset [48].

4.2 Initial Weight Modification

Threat level: *

Training a neural network is essentially solving an optimization problem, and the convergence of that optimization depends heavily on how the weights are initialized. Recent work shows that by using a specially crafted initialization of the network weights, an attacker can make the loss fail to decrease or slow convergence dramatically, wasting training time and compute [49]. Because the change is buried in the initialization code, users who are not familiar with the internals of the training pipeline find it very hard to notice.

As Figure 11 shows, the attacker can choose an initialization that makes the activations collapse as they propagate through the layers, so that the gradients flowing back during training vanish or lose their information and the network becomes very hard to train. Such an attack can be launched simply by replacing the default initialization method.
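Below is a hedged illustration of the general idea (not the exact construction of [49]): a degenerate constant initialization makes every neuron in a layer compute the same function, so their gradients are identical and the network learns far more slowly, while the code change itself is a single easily-overlooked line.

```python
# Sketch: a malicious, degenerate weight initialization slipped into training code.
import torch.nn as nn


def malicious_init(module):
    if isinstance(module, nn.Linear):
        nn.init.constant_(module.weight, 0.5)   # all units identical -> symmetry never broken
        nn.init.constant_(module.bias, 0.0)


model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.apply(malicious_init)   # one extra line buried among normal setup code
```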

Security recommendations: audit the weight initialization mechanism used in the training code.

Figure 11: Effect of malicious weight initialization on training [49].

4.3 Code Attacks

Threat level: *

During the training stage, an attacker who controls the development machine can attack at the code level to mislead the training process. Typical code-level attacks include tampering with configuration and hyper-parameter files, hooking the data-loading code, and hijacking the tools used to monitor training; all of these quietly alter key parameters or logic of training, without the user's knowledge, and can severely affect the resulting model.


Concretely, training configuration such as hyper-parameters, data paths and model settings is usually stored in configuration files (.py, .json and the like). By silently changing these values, an attacker can seriously degrade the outcome of the entire training run. Because neural network training is itself largely a black-box process, the effect of a changed hyper-parameter is hard to attribute, and the designers of the network may never realize they have been attacked.

Besides changing parameters directly, an attacker can also inject code into the development environment, for example by hooking ReadFile-style functions used by the data pipeline so that the data fed into training is silently tampered with. Models trained this way fail to reach the expected performance or acquire attacker-chosen behaviour, users debugging the run find it very hard to trace the problem back to the hooked code, and AI researchers rarely look for this class of security risk.
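As a hedged illustration of this "hook the data-reading function" idea, the sketch below wraps a dataset object so that a fraction of samples are silently corrupted on load; the swapped-in wrapper could sit in any imported helper module, and the damage is hard to see from loss curves alone. The dataset interface and corruption rate are illustrative assumptions.

```python
# Sketch: a tampered dataset wrapper silently corrupting ~10% of training samples.
import random
import numpy as np


class TamperedDataset:
    """Wraps a dataset whose __getitem__ returns (image, label)."""

    def __init__(self, dataset, rate=0.1):
        self.dataset, self.rate = dataset, rate

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        image, label = self.dataset[idx]
        if random.random() < self.rate:
            # add heavy noise to the input while keeping the label untouched
            image = image + np.random.normal(0, 0.5, size=np.shape(image))
        return image, label
```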

Network training is also commonly monitored with TensorBoard. An attacker who can tamper with the monitoring channel, for example from a man-in-the-middle position, can intercept and modify the logged metrics so that the displayed training curves deviate from reality; the practitioner then makes wrong judgments based on misleading curves, and even the research direction can be misled. Attacks that control the development process in this way are likewise very harmful.

Security recommendations: apply baseline infrastructure security measures to the development and training environment, keep software and patches up to date, and perform security checks on any third-party models or files used in the pipeline.

4.4 Training Backdoor Attacks

Threat level: ***

Training a well-performing deep learning model requires large amounts of computing resources, so many users prefer to train their models on third-party platforms. This uncontrolled training process can likewise be backdoored: because the training is not under the user's control, the third-party training platform can modify the training dataset submitted by the user and implant a backdoor in the manner described in Section 3.2 of this report.

Security recommendations: avoid training on third-party computing platforms that carry such risks; after obtaining a trained model, apply the backdoor defense methods discussed in Section 3.2 to detect and remove potential backdoors.

4.5 Non-Centralized Scenarios

Threat level: **

Federated learning is the typical mechanism for non-centralized scenarios: it is a recently proposed learning paradigm in which participants train on their own private data locally and jointly update a shared global model without exposing their raw data. However, the distributed nature of federated learning also brings specific risks. In particular, it is vulnerable to data poisoning: an attacker who controls even a single participant can poison its local data and thereby influence the global model.

From the perspective of the attacker's goal, attacks against federated learning can be grouped as follows: (1) data poisoning attacks, which can substantially reduce the accuracy of the global model [27, 51]; (2) Byzantine attacks, which can prevent the federated model from converging [27]; and (3) backdoor attacks [51, 52], which place the model under the attacker's covert control. The first two mainly waste training resources, whereas a backdoor may be triggered when the victim later uses the model and exploited maliciously in specific scenarios, creating a direct security risk.


Figure 12: Left: a poisoning mechanism against federated learning [27]; right: an illustration of the Byzantine attack [50].

Concretely, as Figure 12 shows, an attacker can control one or more participants and train them locally on poisoned data, thereby shifting the aggregated global update and, for example, implanting a backdoor [27]. In addition, anomalous updates submitted by some participants can constitute a Byzantine attack that prevents the model from converging [27]. In the right panel of Figure 12, the gradients computed by honest participants point roughly along the true descent direction of the cost function, whereas a malicious participant can contribute a vector that deviates far from it and thereby disrupt the model's convergence [50].

Security recommendations against poisoning: bound the magnitude of each participant's model update, for example by norm clipping, and add random (e.g. Gaussian) noise to the aggregated update to weaken the contribution of any single poisoned client [51]. Security recommendations against Byzantine attacks: use robust aggregation rules such as Krum, which compare the submitted gradients and exclude outliers [50]; the basic idea is to detect updates that differ greatly from those produced by normal training and to reduce their influence on convergence.
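The following is a hedged sketch of Krum-style aggregation [50]: each client update is scored by the sum of squared distances to its closest neighbours, and the update with the lowest score is kept, which tends to exclude outlying (Byzantine) contributions. Update shapes and the neighbour count follow the standard formulation; names are illustrative.

```python
# Sketch of Krum robust aggregation for federated updates.
import numpy as np


def krum(updates, num_byzantine):
    """updates: list of 1-D numpy arrays (flattened client updates)."""
    n = len(updates)
    k = n - num_byzantine - 2                       # neighbours counted per candidate
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        neighbours = np.sort(np.delete(dists[i], i))[:k]
        scores.append(neighbours.sum())
    return updates[int(np.argmin(scores))]          # the most "central" update
```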

5 Model Deployment

5.1 Data Recovery from Models

Threat level: *

Developers commonly protect their data by releasing only the trained model and keeping the training data private. Recent research shows, however, that an attacker may still be able to recover part of the training data from the released model. In particular, most modern networks contain batch normalization layers whose running statistics summarize the distribution of training-set features at those layers. Exploiting this information, an attacker can reconstruct images that closely resemble the training data [53]; the basic idea is illustrated in Figure 13.
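Below is a hedged, heavily simplified sketch of the statistics-matching idea behind [53]: synthetic inputs are optimized so that the feature statistics they induce at each BatchNorm layer match the running mean and variance stored in the released model. It assumes a simple nn.Sequential model and only defines the matching loss; in practice this loss would be minimized with respect to a batch of images initialized from noise, together with a class-conditioning term.

```python
# Sketch of BatchNorm-statistics matching for data-free reconstruction.
import torch.nn as nn
import torch.nn.functional as F


def bn_matching_loss(model, x):
    loss, feats = 0.0, x
    for layer in model:                      # assumes model is an nn.Sequential
        pre_bn = feats                       # input that this layer will see
        feats = layer(feats)
        if isinstance(layer, nn.BatchNorm2d):
            mean = pre_bn.mean(dim=(0, 2, 3))
            var = pre_bn.var(dim=(0, 2, 3), unbiased=False)
            loss = loss + F.mse_loss(mean, layer.running_mean) \
                        + F.mse_loss(var, layer.running_var)
    return loss   # minimize w.r.t. x (a batch of synthetic images requiring grad)
```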

Defenses against this class of attack are still at an early stage, and no mature countermeasure is available yet.

5.2 Model File Attacks

Threat level: **

Conventional attacks on models work by manipulating the data they process, but at the deployment stage the model files themselves can also be modified directly.


Figure 13: Illustration of data reconstruction from a released model [53].

Figure 14: Illustration of the bit-flip attack [11].

This is possible because many machine learning frameworks save model files in fixed, parseable formats that carry latent security risks. By parsing such a file one can recover the network structure, parameters and other information, and by modifying it one can implant a backdoor or change the model's behaviour and outputs. Compared with data poisoning, directly tampering with the model weights or files is harder to notice and to trace, although it requires the attacker to have a higher level of access to the deployed model.

Deep learning frameworks such as PyTorch save model structure and parameters using pickle-style serialization by default. Exploiting this, an attacker can insert malicious code into a saved model file so that arbitrary commands run automatically when the user loads it. An attacker can also modify specific neuron weights in the model file to degrade performance or implant a backdoor [12]. As Figure 14 shows, with the method of [11] an attacker can drop a model's accuracy from about 70% to 0.1% by flipping just 13 of its roughly 93 million weight bits; by reverse-engineering the stored parameters, the tampering can even be crafted so that the malicious behaviour only triggers under specific conditions.
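To make the bit-flip threat concrete, the following hedged sketch shows what a single flipped bit does to one stored weight: the float32 value is reinterpreted as its 32-bit integer pattern, one bit is flipped, and the result is reinterpreted back. Flipping a high exponent bit turns a small weight into a huge value; the specific bit index and weight are illustrative, and searching for the most damaging bits is what [11] automates.

```python
# Sketch: the effect of flipping a single bit in a stored float32 weight.
import numpy as np


def flip_bit(value, bit_index):
    bits = np.array(value, dtype=np.float32).view(np.uint32)
    bits ^= np.uint32(1 << bit_index)
    return float(bits.view(np.float32))


w = np.float32(0.05)
print(w, "->", flip_bit(w, 30))   # flipping an exponent bit explodes the weight
```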

Carrying out this class of attack has prerequisites: the attacker must be able to tamper with the model file during deployment, and publicly documented real-world cases are still rare. Nevertheless, because the modifications can be small and hidden deep inside the parameters, how to detect and prevent tampering with the neurons and metadata stored in model files remains an open problem worth further exploration.


Figure 15: An adversarial attack on image classification with the Inception model [54]; the original image is taken from ImageNet [55].

6 Model Use

6.1 Digital Adversarial Attacks

Threat level: ***

Adversarial attacks target AI models at inference time [56, 57, 58, 59, 60]. The attacker adds to a normal sample a perturbation that is almost imperceptible to humans, yet causes the AI model to make a wrong prediction on the perturbed sample. As Figure 15 shows, the Inception model [54] classifies the original sample x correctly, but after an almost invisible adversarial perturbation ε is added, the adversarial sample x + ε is confidently misclassified by Inception as a different class, even though to a human observer the two images look essentially the same. Given that AI models based on deep neural networks are widely deployed in finance and payment, security and surveillance, autonomous driving and other important fields, adversarial attacks expose a substantial intrinsic security risk of AI models.

An adversarial attack generally has to satisfy two requirements: (1) the perturbation should be small enough to be hard for humans to notice, and (2) the prediction on the adversarial sample should differ from that on the normal sample (or match an attacker-chosen target). Based on these two requirements, generating an adversarial sample can be modelled as the following optimization problem:

$$\arg\min_{x_{\epsilon} \in C} \; D(x_{\epsilon}, x) + \lambda_{1}\, L_{1}\big(f(x_{\epsilon}; w),\, t\big) \qquad (1)$$

where x denotes the normal sample, xε denotes the corresponding adversarial sample, t denotes the target class of the attack, and f(·;w) denotes the attacked deep learning model with parameters w. C denotes the feasible set of xε (for example, pixel values restricted to [0, 1]). D is a distance function measuring the difference between xε and x, the loss function L1 measures the gap between the model output f(xε;w) and the attack target t, and λ1 balances the two terms. Depending on how much information the attacker has, adversarial attacks are commonly divided into white-box and black-box attacks; depending on the attack goal, they are divided into targeted and untargeted attacks, among other variants. Figure 16 summarizes the common categories of adversarial attacks.

White-box attacks assume that the attacker knows the structure, weights and other details of the victim model f(·;w). In the white-box setting, the adversarial sample xε can be obtained by directly minimizing objective (1) with gradient descent (a minimal sketch is given below). In many real attack scenarios, however, the structure and weights of the victim model are invisible to the attacker; black-box attacks study how to find the adversarial sample xε when this information cannot be obtained.
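The following is a hedged sketch of a white-box targeted attack in the spirit of objective (1): the input is iteratively perturbed towards the target class t using the model's gradients, with the perturbation projected back into a small L-infinity ball. The names model and target are placeholders, the step sizes are illustrative, and this is a generic projected-gradient-style routine rather than any specific published method.

```python
# Sketch of a projected-gradient targeted white-box attack.
import torch
import torch.nn.functional as F


def targeted_pgd(model, x, target, eps=8 / 255, alpha=2 / 255, steps=20):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)   # distance to the target class
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # step towards the target
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # stay in the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)            # stay a valid image
    return x_adv
```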


Figure 16: Basic concepts and common categories of adversarial attacks.


Because the structure and parameters of f(·;w) are unknown, objective (1) cannot be differentiated with respect to xε directly. The attacker can, however, usually query the model through its API and observe its predictions, for example obtaining face-comparison similarity scores from a face recognition API. Black-box attacks therefore query the model and adjust the perturbation based on the returned outputs until the attack goal is reached; a key evaluation metric is the number of API queries needed for a successful attack. Mainstream black-box approaches include transfer-based attacks, gradient estimation and random or heuristic search [62, 63, 64, 65, 66, 67, 68, 69, 70, 71]. Transfer attacks first build a substitute model, run a white-box attack against it, and then use the resulting adversarial samples directly against the black-box model [69]; such methods work reasonably well for untargeted attacks but have low success rates for targeted ones. Gradient-estimation attacks estimate the gradient of the black-box model f(·;w) with respect to xε and then apply gradient descent to solve for the adversarial sample [62, 72]; the central problem is how to estimate this gradient efficiently. Random-search style attacks are another widely studied family [68, 73], often exploiting prior knowledge to guide the search and improve query efficiency. Black-box attacks have already been demonstrated against image classification [62, 71], face recognition [63], video recognition [64] and other tasks. On the CIFAR-10 [74] image classification task, for example, attacking a DenseNet-style model takes on average about 44 model API queries for an untargeted attack and about 787 queries for a targeted attack [71].
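As an illustration of the gradient-estimation family, the following hedged sketch queries a black-box scoring API at randomly perturbed inputs and averages the score differences into a gradient estimate; the resulting estimate can then drive the same update rule as in the white-box sketch above. query_fn is a placeholder for the remote API returning class probabilities, and the estimator shown is a generic antithetic sampling scheme rather than the specific algorithms cited.

```python
# Sketch of score-based black-box gradient estimation.
import numpy as np


def estimate_gradient(query_fn, x, target_index, sigma=0.01, num_queries=50):
    """query_fn(x) returns a vector of class probabilities from the black-box API."""
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        noise = np.random.randn(*x.shape)
        plus = query_fn(x + sigma * noise)[target_index]
        minus = query_fn(x - sigma * noise)[target_index]
        grad += (plus - minus) * noise
    return grad / (2 * sigma * num_queries)
```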

A black-box attack does not require the victim's training data, model structure or parameters; the attack is mounted purely through API queries, which makes it applicable in far more realistic settings. Current mainstream black-box attacks, however, usually need a very large number of queries: against commercial-grade AI models such as face-comparison services, generating an imperceptible adversarial perturbation by black-box attack can require tens of thousands of API calls [63], and such heavy querying can itself be detected and blocked by the defender. The key research challenge is therefore how to exploit priors and better search strategies to reduce the number of queries; if the required queries could be brought down from tens of thousands to hundreds or even fewer, the threat to commercial AI systems would become very serious.

Adversarial attack research has so far focused mainly on computer vision [75], but there is also work attacking speech systems [76, 77, 78, 79, 80]. As Figure 17 shows, an attacker can add to a normal speech clip a perturbation that is barely perceptible to the human ear.


Figure 17: An adversarial attack on speech-to-text conversion [61].


Such a perturbed clip can make a speech recognition system transcribe completely different content [61]. Speech recognition is widely used in smart home devices such as televisions and smart speakers; an adversarial audio clip played near these devices could be recognized as a command the user never issued, which poses a clear security risk. Like image attacks, audio attacks can be targeted or untargeted: the former requires the adversarial audio to be recognized as specific attacker-chosen content, while the latter only requires it to be recognized as something other than the original, so targeted attacks are considerably harder. Recording, playback and transmission through the air introduce distortions, and the models and front-end processing of speech systems are relatively complex, which further raises the difficulty of a practical attack. Current audio adversarial attacks are mostly white-box; making adversarial audio remain effective after being played through a speaker and re-captured by a microphone is a direction that deserves further study.

Beyond vision and speech, mainstream AI systems for text [81], recommendation [82], reinforcement learning [83] and retrieval [58] have also been shown to be vulnerable to adversarial samples.

Security recommendations: to counter the threat of adversarial attacks, feasible defensive measures include:

• At the model level: study the factors that affect a model's adversarial robustness and design more robust model architectures.

• At the data level: enrich the training data, for example with adversarially perturbed samples or robustness-oriented features, so that the model learns robust representations during training.

• At the training level: harden the model against adversarial samples during training, for example via adversarial training [56], to obtain a more robust model.

• At the deployment level: add adversarial-sample detection and input purification steps such as image preprocessing (denoising, compression, rescaling, brightness changes and so on) to weaken adversarial perturbations before they reach the model.

• Limit the number and rate of API calls to hinder query-based black-box attacks.


6.2 Physical Adversarial Attacks

Threat level: *

Most current adversarial attacks take place in the digital world, i.e. the attacker directly modifies the digital representation of an object (such as stored pixel values). In many real applications, however, the system acquires its inputs through hardware such as cameras and sensors, which scan objects in the physical world and convert them into digital representations that are then processed: a face recognition system works on images captured by a camera, and an autonomous driving system perceives its surroundings through cameras and sensors. In such settings the capture process is not controlled by the attacker, so the attacker generally cannot modify the captured digital object directly. Physical adversarial attacks instead achieve the adversarial effect by altering the object in the physical world itself.

Figure 18: A sticker attack on a traffic stop sign [84].

The typical approach to physical attacks today is to attach specially crafted adversarial stickers or patches to the target object [84, 85, 86, 87, 88, 89, 90, 91, 92]. For example, [84] studies physical attacks on traffic sign classification: as Figure 18 shows, sticking black and white adversarial patches onto a Stop sign misleads the classifier into recognizing it as a speed-limit or other sign, posing a serious safety risk for autonomous driving. Reference [85] studies physical attacks on face recognition: by wearing specially printed eyeglass frames, the attacker can fool the face recognition system and impersonate other people, threatening systems such as face verification and face payment.

Because the physical scene cannot be modelled exactly and optimized over directly, physical attacks are usually mounted via transfer: the attacker first crafts an adversarial object against a (substitute) model in the digital domain, then prints it and places it in the physical environment. The difficulty lies in the fidelity and robustness of the digital-to-physical transfer. Printing introduces colour deviations, and re-capturing the scene with a camera is affected by physical conditions such as lighting, viewing angle and distance, so the adversarial object as re-captured deviates from the one optimized digitally, and a perturbation that works in the digital domain may lose its effect in the physical world. Research on physical attacks therefore focuses on making the adversarial object robust to such variation. Two commonly used ingredients are the Non-Printability Score (NPS) [85] and Expectation over Transformation (EOT) [86]. NPS measures how far the optimized colours are from the set of colours a printer can reliably reproduce, and is used as a penalty during optimization so that the adversarial object stays as close as possible to the printable colour set.


Figure 19: Illustration of the model stealing pipeline [93].

This reduces the precision lost in printing. EOT, in turn, incorporates the transformations that may occur in the physical world (affine transformations, changes of distance, rotations and so on) into the digital optimization, by optimizing the expected attack loss over a distribution of such transformations, so that the resulting adversarial object remains effective under these variations.
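The following is a hedged sketch of the EOT idea [86]: the attack loss is averaged over randomly sampled transformations so that the optimized perturbation stays effective after printing and re-capture. Here transform_fn is assumed to sample one random physical transformation (rotation, scaling, brightness change, ...) and loss_fn is the targeted attack loss; both are placeholders.

```python
# Sketch of an Expectation-over-Transformation loss term.
def eot_loss(loss_fn, transform_fn, x_adv, num_samples=10):
    total = 0.0
    for _ in range(num_samples):
        total = total + loss_fn(transform_fn(x_adv))   # loss under one random transform
    return total / num_samples                          # averaged (expected) attack loss
```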

Security recommendations: defenses against physical attacks can be applied at the data-acquisition stage, at a detection stage, and in the model itself. Current physical attacks are still not highly robust to changes in the capture environment, so at acquisition time one can combine images taken from multiple angles and distances and make a joint decision. Before feeding an input to the model, one can also run detection to judge whether the image has been adversarially tampered with, exploiting the distributional differences between adversarial and natural images. Finally, the model itself can be made more robust to adversarial objects, typically through adversarial training.

6.3 Model Stealing

Threat level: **

As deep networks grow deeper, training them becomes increasingly expensive, and a well-performing model has become a core asset of the company that owns it. Owners therefore usually do not release such models or their technical details; instead the model is deployed in the cloud and exposed to users through a paid prediction API, i.e. machine learning as a service (MLaaS). Recent research shows, however, that an attacker can repeatedly query the API, collect the input-output pairs, and use them to steal the functionality of the model; the basic pipeline is illustrated in Figure 19.

Reference [94] first showed that models can be stolen through their prediction APIs, proving that this style of attack is feasible, although it could only extract relatively small models. [95] subsequently addressed this limitation with methods capable of stealing larger models. [93] studied the conditions and influencing factors of model stealing in detail and, with the help of reinforcement learning, extracted the functionality of large, complex networks. [96] used adversarial examples to significantly reduce the number of queries needed, stealing commercially deployed cloud models at low cost.
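The following is a hedged sketch of API-based functionality stealing in the spirit of [93, 94]: a pool of query images is labelled by the victim's prediction API, and a local surrogate is then fit on the (input, prediction) pairs. victim_api (assumed to return a probability vector per image) and surrogate are placeholders for the remote service and a trainable local model; batching, query selection strategies and other refinements from the cited works are omitted.

```python
# Sketch of training a surrogate on victim API outputs (knockoff-style stealing).
import torch
import torch.nn.functional as F


def steal(victim_api, surrogate, query_images, epochs=5, lr=1e-3):
    labels = torch.stack([victim_api(img) for img in query_images])   # soft outputs
    inputs = torch.stack(list(query_images))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(surrogate(inputs), dim=-1), labels,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return surrogate
```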

Security recommendations: limit the number of queries a user can issue; return only the minimum necessary information (for example the top label rather than full confidence scores), or add noise or other obfuscation to the model outputs.


Figure 20: Memory layout of a running neural network application [97]. By overflowing a buffer, an attacker can reach adjacent address space and modify the model parameters to attack the model.

6.4 GPU/CPU Overflow Vulnerabilities

Threat level: *

Most current attacks on AI systems operate at the data or algorithm level, but security problems also exist at the memory level. An attacker can exploit memory-overflow flaws on specific GPU or CPU computing devices, for example in code using Nvidia's CUDA memory model, to leak hidden-layer weights of a neural network or change the model's predictions. The project in [97] on GitHub demonstrates this style of attack. Concretely, a simple computer-vision application typically performs some preprocessing, such as a fast resize, before feeding an image to the network for classification; both the image buffer and the model are loaded into DRAM, and in the absence of bounds checks an overflow while processing the image can overwrite parts of the model itself, so that both the image and the model end up under the attacker's control in memory. Figure 20 shows the memory layout involved.

In fact, if the attacker can overflow the buffers holding these images, they can read out the network model, and during this process the model's parameters can be modified at will, so the neural network can be turned into a network with arbitrary behaviour, including a backdoored one [98, 99]; the threat should not be underestimated.

Research in this area is still at an early stage, and no dedicated defense method is available yet.

7 Model Architecture

7.1 Query-Based Architecture Stealing

Threat level: *

The architecture of a model is an important factor in how well adversarial attacks transfer to it [100]; if an attacker can learn the structure of the victim model, the success rate of subsequent malicious attacks increases markedly.

Although research on architecture stealing is still limited, some results already exist. For example, by querying the network and observing its outputs, an attacker can infer the structure of a network that uses ReLU activations [101]. This risk stems from the nature of ReLU networks: ReLU is a piecewise linear function, so the network output is piecewise linear in its input, and the boundaries between linear regions reveal where individual ReLU units switch on and off. By probing these boundaries, the attacker can recover structural information about the network; as Figure 21 shows, the method can precisely determine the number of hidden units in each layer.

Research in this area is still at an early stage, and no dedicated defense method is available yet.


Figure 21: Query-based architecture stealing [101].

Figure 22: Results of side-channel architecture stealing [102].

7.2 Side-Channel Architecture Stealing

Threat level: *

Side-channel attacks obtain information from physical by-products of computation, such as timing, electromagnetic emissions or shared caches, rather than by brute force or by exploiting theoretical weaknesses in the algorithm itself. Reference [102] uses a cache side channel: because popular deep learning frameworks execute characteristic sequences of operations, an attacker co-located on the same machine can monitor the shared cache during the model's inference, recover the sequence of executed functions, and from it deduce the network's architecture and hyper-parameters with high accuracy, as shown in Figure 22.

With the architecture recovered through the side channel, an attacker can then train a model of similar structure on their own data and replicate the functionality of the target, causing intellectual-property loss to vendors who keep their model architectures secret. The potential damage is considerable, although the attack requires co-location and is relatively costly to carry out.

Research in this area is still at an early stage, and no dedicated defense method is available yet.

8 Attack Consequences

8.1 Model Misjudgment

The most direct consequence of attacks on AI is that the model produces wrong predictions; adversarial attacks, poisoning, backdoors and the other techniques discussed above all ultimately mislead the model through various targeted or untargeted means.


Figure 23: Misjudgment results of an object-detection network under attack [103].

In an untargeted attack, the attacker only needs to disturb the network's judgment, for example by lowering its confidence or producing an arbitrary wrong output. In face recognition, an attacker can make the similarity score between two images of the same person drop sharply [104]; in object detection, an attacker can make an object go undetected [105] or be recognized as something else; in a recommendation system, an attacker can make the system return irrelevant recommendations [106].

For a wide range of tasks, universal adversarial samples that fool deep systems have already been found [107, 108, 109, 92], enabling untargeted attacks: these adversarial samples differ only slightly from the originals, yet the network's outputs change drastically. The existence of such samples introduces great uncertainty and risk into the use of these systems. The attacks also transfer: in the example of Figure 23, universal adversarial samples crafted against a Mask R-CNN model also succeed against five other detection models, so that the models fail to recognize objects in the image or produce wrong detection results.

An untargeted attack only requires the model to be wrong; a targeted attack goes further and requires the model to produce a specific wrong prediction. As Figure 24 shows, a model trained on ImageNet originally predicts the clean sample correctly, yet by adding a small perturbation to the image the attacker can make the whole network output results related to the attacker-chosen 'Bonnet' class [110]. In face recognition, a similar targeted perturbation could make the model identify the attacker as a highly privileged person, with serious consequences. More threatening black-box targeted attacks are also under active study [111].


Figure 24: A targeted adversarial attack [110].

8.2 Information Leakage

If model misjudgment is the consequence of attacking an AI system's behaviour, information leakage is the consequence of stealing from it. Such attacks can result in the model's functionality being copied by a third party, in the leakage of users' private data, and in the loss of valuable corporate information assets.

A good model is usually the product of heavy investment in data collection and computation. Companies deploy their trained models in the cloud and expose them to users through API interfaces; a user can feed the API large numbers of inputs, collect the outputs, and fit a model of comparable functionality, thereby undermining the commercial value of the original model and effectively stealing the owner's research investment.

With privacy protection receiving ever more attention, user data is both a valuable asset and a liability: leaking it damages business reputation and may even carry legal consequences. Because machine learning is data-driven, distributed and communication-efficient learning methods are often adopted to process data across sources while keeping the raw data local and protected [112, 113]. Even in such settings, however, the trained model can still leak information about its data: as discussed earlier, attackers with access to gradients or to the released model may reconstruct parts of the training data, creating a privacy-leakage risk.


9 Conclusion

AI technology is now widely applied in scenarios such as face payment, face-based security, speech recognition and machine translation, and the security of AI systems has attracted growing attention. Malicious attacks against AI models can expose users to serious risk: for example, an attacker may fool a face recognition system with specially crafted stickers and thereby impersonate someone else. Taking the attacker's perspective, this report systematically surveys the security risks at every stage of an AI model's life cycle and gives corresponding defense recommendations. The matrix covers both traditional attack techniques such as dependency software attacks, malicious Docker access and supply chain attacks, and newer attacks specific to AI models such as adversarial attacks and backdoor attacks. We hope that the attack matrix and the attack methods described here help AI developers and security practitioners better understand the risk points of AI systems and the available defenses, and provide a reference for the secure deployment and application of AI systems.

10 Copyright and Disclaimer

This report was produced and published by Tencent. The recommendations it contains are general references only and do not constitute advice for any specific situation; readers should exercise their own judgment before acting on any information in this report, and Tencent accepts no liability for losses arising from the use of the information contained herein.

10.1 Author Affiliations

Tencent AI Lab

Baoyuan Wu ([email protected])
Yanbo Fan ([email protected])
Yong Zhang ([email protected])
Yiming Li ([email protected])
Zhifeng Li ([email protected])
Wei Liu ([email protected])

Tencent Security Platform Department Zhuque Lab

vikingli ([email protected])
jifengzhu ([email protected])
allenszchen ([email protected])
ucasjhxu ([email protected])
dylandi ([email protected])
xunsu ([email protected])


10.2 About Us

Tencent AI Lab

Tencent AI Lab is Tencent's corporate-level AI research lab, founded in Shenzhen in April 2016. Building on Tencent's strengths in application scenarios, large-scale data, computing power and talent, the lab combines fundamental research with application-oriented exploration to keep improving AI's capabilities in understanding and decision making, with the vision of "Make AI Everywhere".

Figure 25: Tencent AI Lab.

Tencent Security Platform Department Zhuque Lab

The Zhuque Lab of Tencent's Security Platform Department focuses on advanced, real-world attack research and AI security research, using offensive research to strengthen defenses and safeguard Tencent's businesses and users.

Figure 26: Tencent Security Platform Department Zhuque Lab.


References

[1] Q. Xiao, K. Li, D. Zhang, and W. Xu, “Security risks in deep learning implementa-tions,” in IEEE S&P Workshop, 2018.

[2] https://www.cvedetails.com/vulnerability-list/vendor id-1224/product id-53738/Google-Tensorflow.html.

[3] https://www.securityweek.com/serious-vulnerabilities-patched-opencv-computer-vision-library.

[4] https://www.secpod.com/blog/opencv-buffer-overflow-vulnerabilities-jan-2020/.

[5] https://www.microsoft.com/security/blog/2020/06/10/misconfigured-kubeflow-workloads-are-a-security-risk.

[6] https://security.tencent.com/index.php/blog/msg/130.

[7] https://www.kubeflow.org/docs/notebooks/setup/.

[8] Z. Liu, J. Ye, X. Hu, H. Li, X. Li, and Y. Hu, “Sequence triggered hardware trojan inneural network accelerator,” in VTS, 2020.

[9] https://www.microsoft.com/security/blog/2020/04/02/attack-matrix-kubernetes/.

[10] J. Clements and Y. Lao, “Hardware trojan attacks on neural networks,” arXiv preprintarXiv:1806.05768, 2018.

[11] A. S. Rakin, Z. He, and D. Fan, “Bit-flip attack: Crushing neural network with pro-gressive bit search,” in ICCV, 2019.

[12] nEINEI, "Attacking neural networks: implanting model backdoors via malicious model files" (in Chinese), in XFocus Information Security Conference, 2020.

[13] C. Zhu, W. R. Huang, A. Shafahi, H. Li, G. Taylor, C. Studer, and T. Goldstein,“Transferable clean-label poisoning attacks on deep neural nets,” arXiv preprint arX-iv:1905.05897, 2019.

[14] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Gold-stein, “Poison frogs! targeted clean-label poisoning attacks on neural networks,” inNeurIPS, 2018.

[15] H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli, “Support vectormachines under adversarial label contamination,” Neurocomputing, vol. 160, pp. 53–62, 2015.

[16] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector ma-chines,” arXiv preprint arXiv:1206.6389, 2012.


[17] S. Mei and X. Zhu, “Using machine teaching to identify optimal training-set attackson machine learners.” in AAAI, 2015.

[18] J. Feng, Q.-Z. Cai, and Z.-H. Zhou, “Learning to confuse: Generating training timeadversarial data with auto-encoder,” in NeurIPS, 2019.

[19] H. Chacon, S. Silva, and P. Rad, “Deep learning poison data attack detection,” inICTAI, 2019.

[20] Y. Chen, Y. Mao, H. Liang, S. Yu, Y. Wei, and S. Leng, “Data poison detectionschemes for distributed machine learning,” IEEE Access, vol. 8, pp. 7442–7454, 2019.

[21] Y. Li, B. Wu, Y. Jiang, Z. Li, and S.-T. Xia, “Backdoor learning: A survey,” arXivpreprint arXiv:2007.08745, 2020.

[22] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg, “Badnets: Evaluating backdooringattacks on deep neural networks,” IEEE Access, vol. 7, pp. 47 230–47 244, 2019.

[23] A. Saha, A. Subramanya, and H. Pirsiavash, “Hidden trigger backdoor attacks,” inAAAI, 2020.

[24] S. Zhao, X. Ma, X. Zheng, J. Bailey, J. Chen, and Y.-G. Jiang, “Clean-label backdoorattacks on video recognition models,” in CVPR, 2020.

[25] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, “Targeted backdoor attacks on deeplearning systems using data poisoning,” arXiv preprint arXiv:1712.05526, 2017.

[26] J. Dai, C. Chen, and Y. Li, “A backdoor attack against lstm-based text classificationsystems,” IEEE Access, vol. 7, pp. 138 872–138 878, 2019.

[27] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoorfederated learning,” in AISTATS, 2020.

[28] K. Kurita, P. Michel, and G. Neubig, “Weight poisoning attacks on pre-trained mod-els,” in ACL, 2020.

[29] B. Wang, X. Cao, N. Z. Gong, et al., “On certifying robustness against backdoorattacks via randomized smoothing,” in CVPR Workshop, 2020.

[30] M. Weber, X. Xu, B. Karlas, C. Zhang, and B. Li, “Rab: Provable robustness againstbackdoor attacks,” arXiv preprint arXiv:2003.08904, 2020.

[31] Y. Liu, Y. Xie, and A. Srivastava, “Neural trojans,” in ICCD, 2017.

[32] B. G. Doan, E. Abbasnejad, and D. C. Ranasinghe, “Februus: Input purificationdefense against trojan attacks on deep neural network systems,” in arXiv: 1908.03369,2019.


[33] Y. Li, T. Zhai, B. Wu, Y. Jiang, Z. Li, and S. Xia, “Rethinking the trigger of backdoorattack,” arXiv preprint arXiv:2004.04692, 2020.

[34] K. Liu, B. Dolan-Gavitt, and S. Garg, “Fine-pruning: Defending against backdooringattacks on deep neural networks,” in RAID, 2018.

[35] P. Zhao, P.-Y. Chen, P. Das, K. N. Ramamurthy, and X. Lin, “Bridging mode connec-tivity in loss landscapes and adversarial robustness,” in ICLR, 2020.

[36] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao, “Neuralcleanse: Identifying and mitigating backdoor attacks in neural networks,” in IEEES&P, 2019.

[37] X. Qiao, Y. Yang, and H. Li, “Defending neural backdoors via generative distributionmodeling,” in NeurIPS, 2019.

[38] H. Chen, C. Fu, J. Zhao, and F. Koushanfar, “Deepinspect: A black-box trojan detec-tion and mitigation framework for deep neural networks.” in IJCAI, 2019.

[39] S. Kolouri, A. Saha, H. Pirsiavash, and H. Hoffmann, “Universal litmus patterns:Revealing backdoor attacks in cnns,” in CVPR, 2020.

[40] S. Huang, W. Peng, Z. Jia, and Z. Tu, “One-pixel signature: Characterizing cnn modelsfor backdoor detection,” in ECCV, 2020.

[41] R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, and M. Wang, “Practical detectionof trojan neural networks: Data-limited and data-free cases,” in ECCV, 2020.

[42] B. Tran, J. Li, and A. Madry, “Spectral signatures in backdoor attacks,” in NeurIPS,2018.

[43] B. Chen, W. Carvalho, N. Baracaldo, H. Ludwig, B. Edwards, T. Lee, I. Molloy,and B. Srivastava, “Detecting backdoor attacks on deep neural networks by activationclustering,” in AAAI Workshop, 2018.

[44] Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal, “Strip: A defenceagainst trojan attacks on deep neural networks,” in ACSAC, 2019.

[45] M. Du, R. Jia, and D. Song, “Robust anomaly detection and backdoor attack detectionvia differential privacy,” in ICLR, 2020.

[46] S. Hong, V. Chandrasekaran, Y. Kaya, T. Dumitras, and N. Papernot, “On the effec-tiveness of mitigating data poisoning attacks with gradient shaping,” arXiv preprintarXiv:2002.11497, 2020.

[47] S. B. Venkatakrishnan, S. Gupta, H. Mao, M. Alizadeh, et al., “Learning generalizabledevice placement algorithms for distributed machine learning,” in NeurIPS, 2019.


[48] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” in NeurIPS, 2019.

[49] K. Grosse, T. A. Trost, M. Mosbach, M. Backes, and D. Klakow, “Adversarialinitialization–when your network performs the way i want,” arXiv preprint arX-iv:1902.03020, 2019.

[50] P. Blanchard, R. Guerraoui, J. Stainer, et al., “Machine learning with adversaries:Byzantine tolerant gradient descent,” in NeurIPS, 2017.

[51] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan, “Can you really backdoorfederated learning?” arXiv preprint arXiv:1911.07963, 2019.

[52] C. Xie, K. Huang, P.-Y. Chen, and B. Li, “Dba: Distributed backdoor attacks againstfederated learning,” in ICLR, 2019.

[53] H. Yin, P. Molchanov, J. M. Alvarez, Z. Li, A. Mallya, D. Hoiem, N. K. Jha, andJ. Kautz, “Dreaming to distill: Data-free knowledge transfer via deepinversion,” inCVPR, 2020.

[54] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inceptionarchitecture for computer vision,” in CVPR, 2016.

[55] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scalehierarchical image database,” in CVPR, 2009.

[56] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarialexamples,” STAT, vol. 1050, p. 20, 2015.

[57] Y. Fan, B. Wu, T. Li, Y. Zhang, M. Li, Z. Li, and Y. Yang, “Sparse adversarial attackvia perturbation factorization,” in ECCV, 2020.

[58] J. Bai, B. Chen, Y. Li, D. Wu, W. Guo, S.-t. Xia, and E.-h. Yang, “Targeted attackfor deep hashing based retrieval,” ECCV, 2020.

[59] Y. Xu, B. Wu, F. Shen, Y. Fan, Y. Zhang, H. T. Shen, and W. Liu, “Exact adversarialattack to image captioning via structured output learning with latent variables,” inCVPR, 2019.

[60] X. Chen, X. Yan, F. Zheng, Y. Jiang, S.-T. Xia, Y. Zhao, and R. Ji, “One-shotadversarial attacks on visual tracking with dual attention,” in CVPR, 2020.

[61] N. Carlini and D. Wagner, “Audio adversarial examples: Targeted attacks on speech-to-text,” in IEEE S&P Workshop, 2018.

[62] Y. Guo, Z. Yan, and C. Zhang, “Subspace attack: Exploiting promising subspaces forquery-efficient black-box attacks,” in NeurIPS, 2019.


[63] Y. Dong, H. Su, B. Wu, Z. Li, W. Liu, T. Zhang, and J. Zhu, “Efficient decision-basedblack-box adversarial attacks on face recognition,” in CVPR, 2019.

[64] Z. Wei, J. Chen, X. Wei, L. Jiang, T.-S. Chua, F. Zhou, and Y.-G. Jiang, “Heuristicblack-box adversarial attacks on video recognition models.” in AAAI, 2020.

[65] B. Ru, A. Cobb, A. Blaas, and Y. Gal, “Bayesopt adversarial attack,” in ICLR, 2020.

[66] L. Meunier, J. Atif, and O. Teytaud, “Yet another but more efficient black-box adver-sarial attack: tiling and evolution strategies,” arXiv preprint arXiv:1910.02244, 2019.

[67] P. Zhao, S. Liu, P.-Y. Chen, N. Hoang, K. Xu, B. Kailkhura, and X. Lin, “On thedesign of black-box adversarial examples by leveraging gradient-free optimization andoperator splitting method,” in ICCV, 2019.

[68] A. Al-Dujaili and U.-M. O’Reilly, “Sign bits are all you need for black-box attacks,”in ICLR, 2019.

[69] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practicalblack-box attacks against machine learning,” in ASIACCS, 2017.

[70] Z. Huang and T. Zhang, “Black-box adversarial attack with transferable model-basedembedding,” arXiv preprint arXiv:1911.07140, 2019.

[71] Y. Feng, B. Wu, Y. Fan, Z. Li, and S. Xia, “Efficient black-box adversarial attack guid-ed by the distribution of adversarial perturbations,” arXiv preprint arXiv:2006.08538,2020.

[72] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh, “Zoo: Zeroth order opti-mization based black-box attacks to deep neural networks without training substitutemodels,” in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Se-curity, 2017, pp. 15–26.

[73] M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein, “Square attack: a query-efficient black-box adversarial attack via random search,” ECCV, 2020.

[74] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny im-ages,” 2009.

[75] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, andR. Fergus, “Intriguing properties of neural networks,” in ICLR, 2014.

[76] Y. Gong and C. Poellabauer, “Crafting adversarial examples for speech paralinguisticsapplications,” arXiv preprint arXiv:1711.03280, 2017.

[77] C. Kereliuk, B. L. Sturm, and J. Larsen, “Deep learning and music adversaries,” IEEETransactions on Multimedia, vol. 17, no. 11, pp. 2059–2071, 2015.


[78] M. Cisse, Y. Adi, N. Neverova, and J. Keshet, “Houdini: Fooling deep structuredprediction models,” arXiv preprint arXiv:1707.05373, 2017.

[79] N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, andW. Zhou, “Hidden voice commands,” in USENIX, 2016.

[80] G. Zhang, C. Yan, X. Ji, T. Zhang, T. Zhang, and W. Xu, “Dolphinattack: Inaudiblevoice commands,” in CCS, 2017.

[81] J. Li, S. Ji, T. Du, B. Li, and T. Wang, “Textbugger: Generating adversarial textagainst real-world applications,” arXiv preprint arXiv:1812.05271, 2018.

[82] W. Fan, T. Derr, X. Zhao, Y. Ma, H. Liu, J. Wang, J. Tang, and Q. Li, “Attackingblack-box recommendations via copying cross-domain user profiles,” arXiv preprintarXiv:2005.08147, 2020.

[83] S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel, “Adversarial attackson neural network policies,” arXiv preprint arXiv:1702.02284, 2017.

[84] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash,T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual clas-sification,” in CVPR, 2018.

[85] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Realand stealthy attacks on state-of-the-art face recognition,” in CCS, 2016.

[86] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarialexamples,” in ICML, 2018.

[87] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “A general framework foradversarial examples with objectives,” ACM Transactions on Privacy and Security(TOPS), vol. 22, no. 3, pp. 1–30, 2019.

[88] Y. Zhao, H. Zhu, R. Liang, Q. Shen, S. Zhang, and K. Chen, “Seeing isn’t be-lieving: Practical adversarial attack against object detectors,” arXiv preprint arX-iv:1812.10217, 2018.

[89] L. Huang, C. Gao, Y. Zhou, C. Xie, A. L. Yuille, C. Zou, and N. Liu, “Universalphysical camouflage attacks on object detectors,” in CVPR, 2020.

[90] Z. Kong, J. Guo, A. Li, and C. Liu, “Physgan: Generating physical-world-resilientadversarial examples for autonomous driving,” in CVPR, 2020.

[91] R. Duan, X. Ma, Y. Wang, J. Bailey, A. K. Qin, and Y. Yang, “Adversarial camouflage:Hiding physical-world attacks with natural styles,” in CVPR, 2020.


[92] Z. Wang, S. Zheng, M. Song, Q. Wang, A. Rahimpour, and H. Qi, “advpattern:Physical-world attacks on deep person re-identification via adversarially transformablepatterns,” in ICCV, 2019.

[93] T. Orekondy, B. Schiele, and M. Fritz, “Knockoff nets: Stealing functionality of black-box models,” in CVPR, 2019.

[94] F. Tramer, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machinelearning models via prediction apis,” in USENIX, 2016.

[95] M. Juuti, S. Szyller, S. Marchal, and N. Asokan, “Prada: protecting against dnn modelstealing attacks,” in EuroS&P. IEEE, 2019.

[96] H. Yu, K. Yang, T. Zhang, Y.-Y. Tsai, T.-Y. Ho, and Y. Jin, “Cloudleak: Large-scaledeep learning models stealing through adversarial examples,” in NDSS, 2020.

[97] https://github.com/Kayzaks/HackingNeuralNetworks.

[98] https://github.com/Kayzaks/HackingNeuralNetworks/.

[99] R. Stevens, O. Suciu, A. Ruef, S. Hong, M. Hicks, and T. Dumitras, “Summoningdemons: The pursuit of exploitable bugs in machine learning,” arXiv preprint arX-iv:1701.04739, 2017.

[100] D. Su, H. Zhang, H. Chen, J. Yi, P.-Y. Chen, and Y. Gao, “Is robustness the costof accuracy?–a comprehensive study on the robustness of 18 deep image classificationmodels,” in ECCV, 2018.

[101] D. Rolnick and K. P. Kording, “Identifying weights and architectures of unknown relunetworks,” arXiv preprint arXiv:1910.00744, 2019.

[102] S. Hong, M. Davinroy, Y. Kaya, S. N. Locke, I. Rackow, K. Kulda, D. Dachman-Soled,and T. Dumitras, “Security analysis of deep neural networks operating in the presenceof cache side-channel attacks,” arXiv preprint arXiv:1810.03487, 2018.

[103] S. Chen, F. He, X. Huang, and K. Zhang, “Attack on multi-node attention for objectdetection,” arXiv preprint arXiv:2008.06822, 2020.

[104] S. Chen, P. Zhang, C. Sun, J. Cai, and X. Huang, “Generate high-resolution adversarialsamples by identifying effective features,” arXiv preprint arXiv:2001.07631, 2020.

[105] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples forsemantic segmentation and object detection,” in ICCV, 2017.

[106] J. Li, R. Ji, H. Liu, X. Hong, Y. Gao, and Q. Tian, “Universal perturbation attackagainst image retrieval,” in ICCV, 2019.


[107] C. Sun, S. Chen, J. Cai, and X. Huang, “Type i attack for generative models,” arXivpreprint arXiv:2003.01872, 2020.

[108] S. Tang, X. Huang, M. Chen, C. Sun, and J. Yang, “Adversarial attack type I: Cheatclassifiers by significant changes,” IEEE Transactions on Pattern Analysis and Ma-chine Intelligence, 2019.

[109] Y.-C. Lin, Z.-W. Hong, Y.-H. Liao, M.-L. Shih, M.-Y. Liu, and M. Sun, “Tactic-s of adversarial attack on deep reinforcement learning agents,” arXiv preprint arX-iv:1703.06748, 2017.

[110] S. Chen, Z. He, C. Sun, and X. Huang, “Universal adversarial attack on attention andthe resulting dataset damagenet,” arXiv preprint arXiv:2001.06325, 2020.

[111] J. Han, X. Dong, R. Zhang, D. Chen, W. Zhang, N. Yu, P. Luo, and X. Wang, “Once aman: Towards multi-target attack via learning multi-target adversarial network once,”in ICCV, 2019.

[112] J. Sun, T. Chen, G. Giannakis, and Z. Yang, “Communication-efficient distributedlearning via lazily aggregated quantized gradients,” in NeurIPS, 2019.

[113] F. He, X. Huang, K. Lv, and J. Yang, “A communication-efficient distributed algorithmfor kernel principal component analysis,” arXiv preprint arXiv:2005.02664, 2020.
