[C3] H. K. Nguyen, P.-K. Dong, and X.-T. Tran, “A Reconfigurable Multi-function DMA Controller for High-Performance Computing Systems,” Nov. 2018, pp. 344-349. doi: 10.1109/NICS.2018.8606841.
REFERENCES
[1] R. Alharbi and D. Aspinall, “An IoT Analysis Framework: An Investigation of IoT Smart Cameras' Vulnerabilities,” Jan. 2018, pp. 47 (10 pp.). doi: 10.1049/cp.2018.0047.
[2] “An Overview of Security in Internet of Things,” Procedia Comput. Sci., vol. 143, pp. 744-748, Jan. 2018, doi: 10.1016/j.procs.2018.10.439.
[3] S. L. Keoh, S. S. Kumar, and H. Tschofenig, “Securing the Internet of Things: A Standardization Perspective,” IEEE Internet Things J., vol. 1, no. 3, pp. 265-275, Jun. 2014, doi: 10.1109/JIOT.2014.2323395.
[4] E. Metula, “Internet Of Things (IoT) InSecurity,” 2015.
[5] V. Thakor, M. A. Razzaque, and M. Khandaker, “Lightweight Cryptography Algorithms for Resource-Constrained IoT Devices: A Review, Comparison and Research Opportunities,” IEEE Access, vol. 9, pp. 28177-28193, Jan. 2021, doi: 10.1109/ACCESS.2021.3052867.
[6] V. Hoang, V. Dao, and C. Pham, “Design of ultra-low power AES encryption cores with silicon demonstration in SOTB CMOS process,” Electron. Lett., vol. 53, no. 23, pp. 1512-1514, 2017, doi: 10.1049/el.2017.2151.
[7] D.-H. Bui, D. Puschini, S. Bacles-Min, E. Beigne, and X.-T. Tran, “AES Datapath Optimization Strategies for Low-Power Low-Energy Multisecurity-Level Internet-of-Things Applications,” IEEE Trans. Very Large Scale Integr. VLSI Syst., vol. 25, no. 12, pp. 3281-3290, Dec. 2017, doi: 10.1109/TVLSI.2017.2716386.
[8] D. Swessi and H. Idoudi, “A Survey on Internet-of-Things Security: Threats and Emerging Countermeasures,” Wirel. Pers. Commun., vol. 124, no. 2, pp. 1557-1592, May 2022, doi: 10.1007/s11277-021-09420-0.
[9] P. Williams, I. K. Dutta, H. Daoud, and M. Bayoumi, “A survey on security in internet of things with a focus on the impact of emerging technologies,” Internet Things, vol. 19, p. 100564, Aug. 2022, doi: 10.1016/j.iot.2022.100564.
[10] W. Stallings, Cryptography and Network Security: Principles and Practice, 7th ed. Pearson. https://www.pearson.com/us/higher-education/program/Stallings-Cryptography-and-Network-Security-Principles-and-Practice-7th-Edition/PGM334401.html
[11] National Institute of Standards and Technology, “Advanced Encryption Standard (AES),” U.S. Department of Commerce, Federal Information Processing Standard (FIPS) 197, Nov. 2001. doi: 10.6028/NIST.FIPS.197.
[12] D. Bandyopadhyay and J. Sen, “Internet of Things: Applications and Challenges in Technology and Standardization,” Wirel. Pers. Commun., vol. 58, pp. 49-69, May 2011, doi: 10.1007/s11277-011-0288-5.
[13] “Three Major Challenges Facing IoT - IEEE Internet of Things.” https://iot.ieee.org/newsletter/march-2017/three-major-challenges-facing-iot.html, 2017.
[14] W. Feng, Y. Qin, S. Zhao, and D. Feng, “AAoT: Lightweight Attestation and Authentication of low-resource Things in IoT and CPS,” Comput. Netw., vol. 134, Feb. 2018, doi: 10.1016/j.comnet.2018.01.039.
[15] “Lightweight Block Ciphers for IoT: Energy Optimization and Survivability Techniques | IEEE Journals & Magazine | IEEE Xplore.” https://ieeexplore.ieee.org/document/8387774, 2020.
[16] J. Daemen and V. Rijmen, “AES Proposal: Rijndael,” Oct. 1999.
[17] K. Gaj and P. Chodowiec, “FPGA and ASIC implementations of AES,” in Cryptographic Engineering, Springer, 2009, pp. 235-294.
[18] “Low Power Methodology Manual: For System-on-Chip Design | Guide books.” Available: https://dl.acm.org/doi/10.5555/1534835, 2007.
[19] K. Gaj and P. Chodowiec, “Fast Implementation and Fair Comparison of the Final Candidates for Advanced Encryption Standard Using Field Programmable Gate Arrays,” presented at LNCS, May 2001. doi: 10.1007/3-540-45353-9_8.
[20] “An FPGA-based performance evaluation of the AES block cipher candidate algorithm finalists.” https://www.computer.org/csdl/journal/si/2001/04/00931230/13rRUwI5Udn
[21] T. Ichikawa, T. Kasuya, and M. Matsui, “Hardware Evaluation of the AES Finalists.” https://www.semanticscholar.org/paper/Hardware-Evaluation-of-the-AES-Finalists-Ichikawa-Kasuya/94b7f655b353360ab1ec5341a176bb3f39bfa580
[22] N. Iyer, P. Anandmohan, D. Poornaiah, and V. Kulkarni, “High Throughput, low cost, Fully Pipelined Architecture for AES Crypto Chip,” 2006 Annu. IEEE India Conf., 2006, doi: 10.1109/INDCON.2006.302814.
[23] K. B. Anuroop and M. Neema, “Fully pipelined-loop unrolled AES with enhanced key expansion,” in 2016 IEEE International Conference on Recent Trends in Electronics, Information Communication Technology (RTEICT), May 2016, pp. 988-992. doi: 10.1109/RTEICT.2016.7807977.
[24] H. Qin, T. Sasao, and Y. Iguchi, “A design of AES encryption circuit with 128-bit keys using look-up table ring on FPGA,” IEICE Trans. Inf. Syst., vol. E89-D, Mar. 2006, doi: 10.1093/ietisy/e89-d.3.1139.
[25] A. A. Pammu, W. Ho, N. K. Z. Lwin, K. Chong, and B. Gwee, “A High Throughput and Secure Authentication-Encryption AES-CCM Algorithm on Asynchronous Multicore Processor,” IEEE Trans. Inf. Forensics Secur., vol. 14, no. 4, pp. 1023-1036, Apr. 2019, doi: 10.1109/TIFS.2018.2869344.
[26] L. Henzen and W. Fichtner, “FPGA parallel-pipelined AES-GCM core for 100G Ethernet applications,” in 2010 Proceedings of ESSCIRC, Sep. 2010, pp. 202-205. doi: 10.1109/ESSCIRC.2010.5619894.
[27] K. Järvinen, M. Tommiska, and J. Skyttä, “A fully pipelined memoryless 17.8 Gbps AES-128 encryptor,” Jan. 2003, pp. 207-215. doi: 10.1145/611817.611848.
[28] S. Yoo, D. Kotturi, W. Pan, and J. Blizzard, “An AES crypto chip using a high-speed parallel pipelined architecture,” Microprocess. Microsyst., 2005, doi: 10.1016/j.micpro.2004.12.001.
[29] C. Wang and H. M. Heys, “Using a pipelined S-box in compact AES hardware implementations,” in Proceedings of the 8th IEEE International NEWCAS Conference 2010, Jun. 2010, pp. 101-104. doi: 10.1109/NEWCAS.2010.5603920.
[30] B. Buhrow, K. Fritz, B. Gilbert, and E. Daniel, “A highly parallel AES-GCM core for authenticated encryption of 400 Gb/s network protocols,” in 2015 International Conference on ReConFigurable Computing and FPGAs (ReConFig), Dec. 2015, pp. 1-7. doi: 10.1109/ReConFig.2015.7393321.
[31] Q. Liu, Z. Xu, and Y. Yuan, “High throughput and secure advanced encryption standard on field programmable gate array with fine pipelining and enhanced key expansion,” IET Comput. Digit. Tech., vol. 9, no. 3, pp. 175-184, 2015, doi: 10.1049/iet-cdt.2014.0101.
[32] S. Oukili and S. Bri, “High speed efficient advanced encryption standard implementation,” in 2017 International Symposium on Networks, Computers and Communications (ISNCC), May 2017, pp. 1-4. doi: 10.1109/ISNCC.2017.8071975.
[33] J. Vliegen, O. Reparaz, and N. Mentens, “Maximizing the throughput of threshold-protected AES-GCM implementations on FPGA,” in 2017 IEEE 2nd International Verification and Security Workshop (IVSW), Jul. 2017, pp. 140-145. doi: 10.1109/IVSW.2017.8031559.
[34] Y. Wang and Y. Ha, “High throughput and resource efficient AES encryption/decryption for SANs,” in 2016 IEEE International Symposium on Circuits and Systems (ISCAS), May 2016, pp. 1166-1169. doi: 10.1109/ISCAS.2016.7527453.
[35] “Low Power Design Techniques | Basic Concept of chip design - Truechip.” Accessed: Sep. 09, 2021. https://www.truechip.net/articles-details/low-power-design-techniques-basics-concepts-in-chip-design/26234
[36] S. Mathew et al., “Hardware accelerator with area-optimized encrypt/decrypt GF(2^4)^2 polynomials in 22nm tri-gate CMOS,” in 2014 Symposium on VLSI Circuits Digest of Technical Papers, Honolulu, HI, USA: IEEE, Jun. 2014, pp. 1-2. doi: 10.1109/VLSIC.2014.6858420.
[37] W. Zhao, Y. Ha, and M. Alioto, “Novel Self-Body-Biasing and Statistical Design for Near-Threshold Circuits With Ultra Energy-Efficient AES as Case Study,” IEEE Trans. Very Large Scale Integr. VLSI Syst., vol. 23, no. 8, pp. 1390-1401, Aug. 2015, doi: 10.1109/TVLSI.2014.2342932.
[38] L. Dong, N. Wu, and X. Zhang, “Low Power State Machine Design for AES Encryption Coprocessor.”
[39] D. Canright, “A Very Compact S-Box for AES,” in Cryptographic Hardware and Embedded Systems - CHES 2005, J. R. Rao and B. Sunar, Eds., in Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2005, pp. 441-455.
[40] “On the Optimum Constructions of Composite Field for the AES Algorithm - IEEE Journals & Magazine.” https://ieeexplore.ieee.org/document/1715596
[41] M. M. Wong, M. L. D. Wong, A. K. Nandi, and I. Hijazin, “Construction of Optimum Composite Field Architecture for Compact High-Throughput AES S-Boxes,” IEEE Trans. Very Large Scale Integr. VLSI Syst., vol. 20, no. 6, pp. 1151-1155, Jun. 2012, doi: 10.1109/TVLSI.2011.2141693.
[42] X. Zhang and C. Zeng, “Compact S-box Hardware Implementation with an Efficient MVP-CSE Algorithm.”
[43] F. Zhou, N. Wu, and Yasir, “S-box Optimization for SM4 Algorithm.”
[44] Neuromorphic Computing Principles and Organization. https://link.springer.com/book/10.1007/978-3-030-92525-3
[45] S. Haykin, Neural Networks: A Comprehensive Foundation, Subsequent edition. Upper Saddle River, NJ: Prentice Hall, 1998.
[46] T. H. Vu, R. Murakami, Y. Okuyama, and A. Ben Abdallah, “Efficient Optimization and Hardware Acceleration of CNNs towards the Design of a Scalable Neuro-inspired Architecture in Hardware,” in 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), Jan. 2018, pp. 326-332. doi: 10.1109/BigComp.2018.00055.
[47] T. H. Vu, O. M. Ikechukwu, and A. Ben Abdallah, “Fault-Tolerant Spike Routing Algorithm and Architecture for Three Dimensional NoC-Based Neuromorphic Systems,” IEEE Access, vol. 7, pp. 90436-90452, 2019, doi: 10.1109/ACCESS.2019.2925085.
[48] H.-T. Vu, Y. Okuyama, and A. Abdallah, “Comprehensive Analytic Performance Assessment and K-means based Multicast Routing Algorithm and Architecture for 3D-NoC of Spiking Neurons,” ACM J. Emerg. Technol. Comput. Syst., vol. 15, pp. 1-28, Oct. 2019, doi: 10.1145/3340963.
[49] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002. doi: 10.1017/CBO9780511815706.
[50] E. M. Izhikevich, “Which model to use for cortical spiking neurons?,” IEEE Trans. Neural Netw., vol. 15, no. 5, pp. 1063-1070, Sep. 2004, doi: 10.1109/TNN.2004.832719.
[51] A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” J. Physiol., vol. 117, no. 4, pp. 500-544, 1952, doi: 10.1113/jphysiol.1952.sp004764.
[52] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Trans. Neural Netw., vol. 14, no. 6, pp. 1569-1572, Oct. 2003, doi: 10.1109/TNN.2003.820440.
[53] A. N. Burkitt, “A Review of the Integrate-and-fire Neuron Model: I. Homogeneous Synaptic Input,” Biol. Cybern., vol. 95, no. 1, pp. 1-19, Jul. 2006, doi: 10.1007/s00422-006-0068-6.
[54] P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer, “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing,” in 2015 International Joint Conference on Neural Networks (IJCNN), Jul. 2015, pp. 1-8. doi: 10.1109/IJCNN.2015.7280696.
[55] P. Diehl and M. Cook, “Unsupervised learning of digit recognition using spike-timing-dependent plasticity,” Front. Comput. Neurosci., vol. 9, 2015. https://www.frontiersin.org/article/10.3389/fncom.2015.00099
[56] N. Kasabov, “Evolving Spiking Neural Networks and Neurogenetic Systems for Spatio- and Spectro-Temporal Data Modelling and Pattern Recognition,” in Advances in Computational Intelligence: IEEE World Congress on Computational Intelligence, WCCI 2012, Brisbane, Australia, June 10-15, 2012, Plenary/Invited Lectures, J. Liu, C. Alippi, B. Bouchon-Meunier, G. W. Greenwood, and H. A. Abbass, Eds., in Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 2012, pp. 234-260. doi: 10.1007/978-3-642-30687-7_12.
[57] Y. Dan and M. Poo, “Spike Timing-Dependent Plasticity of Neural Circuits,” Neuron, vol. 44, no. 1, pp. 23-30, Sep. 2004, doi: 10.1016/j.neuron.2004.09.007.
[58] X. Jin, A. Rast, F. Galluppi, S. Davies, and S. Furber, “Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware,” presented at The 2010 International Joint Conference on Neural Networks (IJCNN), Jul. 2010, pp. 1-8. doi: 10.1109/IJCNN.2010.5596372.
[59] Z. Yang, A. Murray, F. Worgotter, K. Cameron, and V. Boonsobhak, “A neuromorphic depth-from-motion vision model with STDP adaptation,” IEEE Trans. Neural Netw., vol. 17, no. 2, pp. 482-495, Mar. 2006, doi: 10.1109/TNN.2006.871711.
[60] S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nat. Neurosci., vol. 3, no. 9, Art. no. 9, Sep. 2000, doi: 10.1038/78829.
[61] C. Frenkel, M. Lefebvre, J.-D. Legat, and D. Bol, “A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS,” IEEE Trans. Biomed. Circuits Syst., vol. 13, no. 1, pp. 145-158, Feb. 2019, doi: 10.1109/TBCAS.2018.2880425.
[62] K. A. Boahen, “Communicating Neuronal Ensembles between Neuromorphic Chips,” in Neuromorphic Systems Engineering: Neural Networks in Silicon, T. S. Lande, Ed., in The Springer International Series in Engineering and Computer Science, Boston, MA: Springer US, 1998, pp. 229-259. doi: 10.1007/978-0-585-28001-1_11.
[63] P. Merolla, J. Arthur, R. Alvarez, J.-M. Bussat, and K. Boahen, “A Multicast Tree Router for Multichip Neuromorphic Systems,” IEEE Trans. Circuits Syst. I, Regul. Pap., vol. 61, no. 3, pp. 820-833, Mar. 2014, doi: 10.1109/TCSI.2013.2284184.
[64] A. Ahmed and A. Abdallah, “Architecture and design of high-throughput, low-latency, and fault-tolerant routing algorithm for 3D-network-on-chip (3D-NoC),” J. Supercomput., vol. 66, Dec. 2013, doi: 10.1007/s11227-013-0940-9.
[65] A. Abdallah, Advanced Multicore Systems-On-Chip: Architecture, On-Chip Network, Design. 2017. doi: 10.1007/978-981-10-6092-2.
[66] A. Ahmed and A. Abdallah, “Graceful deadlock-free fault-tolerant routing algorithm for 3D Network-on-Chip architectures,” J. Parallel Distrib. Comput., vol. 74, Apr. 2014, doi: 10.1016/j.jpdc.2014.01.002.
[67] A. Mortara, E. A. Vittoz, and P. Venier, “A communication scheme for analog VLSI perceptive systems,” IEEE J. Solid-State Circuits, vol. 30, no. 6, pp. 660-669, Jun. 1995, doi: 10.1109/4.387069.
[68] F. Akopyan et al., “TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 34, pp. 1537-1557, 2015, doi: 10.1109/TCAD.2015.2474396.
[69] M. Davies et al., “Loihi: A Neuromorphic Manycore Processor with On-Chip Learning,” IEEE Micro, vol. 38, no. 1, pp. 82-99, Jan. 2018, doi: 10.1109/MM.2018.112130359.
[70] W. Maass and C. M. Bishop, Eds., Pulsed Neural Networks. Cambridge, Mass.: MIT Press, 2001.
[71] S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, “The SpiNNaker Project,” Proc. IEEE, vol. 102, no. 5, pp. 652-665, May 2014, doi: 10.1109/JPROC.2014.2304638.
[72] F. M. M. ul Islam and M. Lin, “Hybrid DVFS Scheduling for Real-Time Systems Based on Reinforcement Learning,” IEEE Syst. J., vol. 11, no. 2, pp. 931-940, Jun. 2017, doi: 10.1109/JSYST.2015.2446205.
[73] D. Liu, S.-G. Yang, Z. He, M. Zhao, and W. Liu, “CARTAD: Compiler-Assisted Reinforcement Learning for Thermal-Aware Task Scheduling and DVFS on Multicores,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., pp. 1-1, 2021, doi: 10.1109/TCAD.2021.3095028.
[74] H. Jung and M. Pedram, “Supervised Learning Based Power Management for Multicore Processors,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 29, no. 9, pp. 1395-1408, Sep. 2010, doi: 10.1109/TCAD.2010.2059270.
[75] L. Deng et al., “Rethinking the performance comparison between SNNs and ANNs,” Neural Netw., vol. 121, pp. 294-307, doi: 10.1016/j.neunet.2019.09.005.
[76] “Power Of 5G Speed: This Doctor Performed Remote Brain Surgery On A Patient 3,000 Km Away,” IndiaTimes. https://www.indiatimes.com/technology/science-and-future/chinese-doctor-used-5g-to-perform-world-s-1st-remote-brain-surgery-on-a-patient-3-000-km-away-364008.html
[77] H. Elayan, O. Amin, B. Shihada, R. M. Shubair, and M.-S. Alouini, “Terahertz Band: The Last Piece of RF Spectrum Puzzle for Communication Systems,” IEEE Open J. Commun. Soc., vol. 1, pp. 1-32, 2020, doi: 10.1109/OJCOMS.2019.2953633.
[78] “Minimum requirements related to technical performance for IMT-2020 radio interface.” IMT standard, 2019.
[79] Dinesh, “China Claims a New 6G Speed Record of 206.25 Gbps,” NepaliTelecom. https://www.nepalitelecom.com/2022/01/china-claims-a-new-6g-speed-record-breakthrough.html
[80] A. Hodjat and I. Verbauwhede, “Area-throughput trade-offs for fully pipelined 30 to 70 Gbits/s AES processors,” IEEE Trans. Comput., vol. 55, no. 4, pp. 366-372, Apr. 2006, doi: 10.1109/TC.2006.49.
[81] S. K. Mathew et al., “53 Gbps Native GF(2^4)^2 Composite-Field AES-Encrypt/Decrypt Accelerator for Content-Protection in 45 nm High-Performance Microprocessors,” IEEE J. Solid-State Circuits, vol. 46, no. 4, pp. 767-776, Apr. 2011, doi: 10.1109/JSSC.2011.2108131.
[82] G. Sayilar and D. Chiou, “Cryptoraptor: High throughput reconfigurable cryptographic processor,” in 2014 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Nov. 2014, pp. 155-161. doi: 10.1109/ICCAD.2014.7001346.
[83] P. Liu, J. Hsiao, H. Chang, and C. Lee, “A 2.97 Gb/s DPA-resistant AES engine with self-generated random sequence,” in 2011 Proceedings of the ESSCIRC (ESSCIRC), Sep. 2011, pp. 71-74. doi: 10.1109/ESSCIRC.2011.6044917.
[84] K. Rahimunnisa, P. Karthigaikumar, N. A. Christy, S. S. Kumar, and J. Jayakumar, “PSP: Parallel sub-pipelined architecture for high throughput AES on FPGA and ASIC,” Cent. Eur. J. Comput. Sci., vol. 3, no. 4, pp. 173-186, Dec. 2013, doi: 10.2478/s13537-013-0112-2.
[85] B. Erbagci, N. E. C. Akkaya, C. Teegarden, and K. Mai, “A 275 Gbps AES encryption accelerator using ROM-based S-boxes in 65nm,” in 2015 IEEE Custom Integrated Circuits Conference (CICC), Sep. 2015, pp. 1-4. doi: 10.1109/CICC.2015.7338448.
[86] Y.-H. Chou and S.-L. L. Lu, “A High Performance, Low Energy, Compact Masked 128-Bit AES in 22nm CMOS Technology,” in 2019 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), Apr. 2019, pp. 1-4. doi: 10.1109/VLSI-DAT.2019.8741835.
[87] L. Ali, I. Aris, F. S. Hossain, and N. Roy, “Design of an ultra high speed AES processor for next generation IT security,” Comput. Electr. Eng., vol. 37, no. 6, pp. 1160-1170, Nov. 2011, doi: 10.1016/j.compeleceng.2011.06.003.
[88] “802.3-2018 - IEEE Standard for Ethernet - IEEE Standard.” https://ieeexplore.ieee.org/document/8457469
[89] P.-K. Dong, H. K. Nguyen, and X.-T. Tran, “A 45nm High-Throughput and Low Latency AES Encryption for Real-Time Applications,” in 2019 19th International Symposium on Communications and Information Technologies (ISCIT), Sep. 2019, pp. 196-200. doi: 10.1109/ISCIT.2019.8905235.
[90] S. Hesham, M. A. A. E. Ghany, and K. Hofmann, “High throughput architecture for the Advanced Encryption Standard Algorithm,” in 17th International Symposium on Design and Diagnostics of Electronic Circuits Systems, Apr. 2014, pp. 167-170. doi: 10.1109/DDECS.2014.6868783.
[91] A. A. Abdelrahman, M. M. Fouad, H. Dahshan, and A. M. Mousa, “High performance CUDA AES implementation: A quantitative performance analysis approach,” in 2017 Computing Conference, Jul. 2017, pp. 1077-1085. doi: 10.1109/SAI.2017.8252225.
[92] N. Nishikawa, H. Amano, and K. Iwai, “Implementation of Bitsliced AES Encryption on CUDA-Enabled GPU,” Jul. 2017, pp. 273-287. doi: 10.1007/978-3-319-64701-2_20.
[93] O. Hajihassani, S. K. Monfared, S. H. Khasteh, and S. Gorgin, “Fast AES Implementation: A High-throughput Bitsliced Approach,” IEEE Trans. Parallel Distrib. Syst., pp. 1-1, 2019, doi: 10.1109/TPDS.2019.2911278.
[94] N. Nishikawa, K. Iwai, H. Tanaka, and T. Kurokawa, “Throughput and Power Efficiency Evaluation of Block Ciphers on Kepler and GCN GPUs Using Micro-Benchmark Analysis,” IEICE Trans. Inf. Syst., vol. E97.D, no. 6, pp. 1506-1515, 2014, doi: 10.1587/transinf.E97.D.1506.
[95] A. Barnes, R. Fernando, K. Mettananda, and R. Ragel, “Improving the throughput of the AES algorithm with multicore processors,” in 2012 IEEE 7th International Conference on Industrial and Information Systems (ICIIS), Aug. 2012, pp. 1-6. doi: 10.1109/ICIInfS.2012.6304791.
[96] D. A. McGrew and J. Viega, “The Security and Performance of the Galois/Counter Mode (GCM) of Operation,” in Progress in Cryptology - INDOCRYPT 2004, A. Canteaut and K. Viswanathan, Eds., in Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 2005, pp. 343-355. doi: 10.1007/978-3-540-30556-9_27.
[97] W. Guo, M. E. Fouda, A. M. Eltawil, and K. N. Salama, “Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems,” Front. Neurosci., vol. 15, 2021. https://www.frontiersin.org/articles/10.3389/fnins.2021.638474
[98] B. V. Benjamin et al., “Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations,” Proc. IEEE, vol. 102, no. 5, pp. 699-716, May 2014, doi: 10.1109/JPROC.2014.2313565.
[99] S. Davidson and S. B. Furber, “Comparison of Artificial and Spiking Neural Networks on Digital Hardware,” Front. Neurosci., vol. 15, 2021. https://www.frontiersin.org/article/10.3389/fnins.2021.651141
[100] S. Furber, “Large-scale neuromorphic computing systems,” J. Neural Eng., vol. 13, Aug. 2016, doi: 10.1088/1741-2560/13/5/051001.
[101] C. Lee, S. S. Sarwar, P. Panda, G. Srinivasan, and K. Roy, “Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures,” Front. Neurosci., vol. 14, 2020. https://www.frontiersin.org/articles/10.3389/fnins.2020.00119
[102] Y. Wu, L. Deng, G. Li, J. Zhu, Y. Xie, and L. P. Shi, “Direct Training for Spiking Neural Networks: Faster, Larger, Better,” Proc. AAAI Conf. Artif. Intell., vol. 33, pp. 1311-1318, Jul. 2019, doi: 10.1609/aaai.v33i01.33011311.
[103] H. Qiao, J. Chen, and X. Huang, “A Survey of Brain-Inspired Intelligent Robots: Integration of Vision, Decision, Motion Control, and Musculoskeletal Systems,” IEEE Trans. Cybern., pp. 1-14, 2021, doi: 10.1109/TCYB.2021.3071312.
[104] M. Pfeiffer and T. Pfeil, “Deep Learning With Spiking Neurons: Opportunities and Challenges,” Front. Neurosci., vol. 12, pp. 1-13, 2018.
[105] B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu, “Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification,” Front. Neurosci., vol. 11, pp. 1-12, 2017.
[106] A. Ben Abdallah and K. N. Dang, “Toward Robust Cognitive 3D Brain-Inspired Cross-Paradigm System,” Front. Neurosci., vol. 15, 2021. https://www.frontiersin.org/article/10.3389/fnins.2021.690208
[107] M. Davies et al., “Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook,” Proc. IEEE, vol. 109, no. 5, pp. 911-934, May 2021, doi: 10.1109/JPROC.2021.3067593.
[108] W. Fang, Y. Chen, J. Ding, D. Chen, Z. Yu, H. Zhou, Y. Tian, and other contributors, “SpikingJelly.” Feb. 15, 2022. https://github.com/fangwei123456/spikingjelly
[109] W. Chouchene, A. Brahim, A. Zitouni, N. Abid, and R. Tourki, “A low power network interface for network on chip,” Int. Multi-Conf. Syst. Signals Devices (SSD'11) - Summ. Proc., Mar. 2011, doi: 10.1109/SSD.2011.5767464.
[110] C. Ababei and N. Mastronarde, “Benefits and costs of prediction based DVFS for NoCs at router level,” in 2014 27th IEEE International System-on-Chip Conference (SOCC), Sep. 2014, pp. 255-260. doi: 10.1109/SOCC.2014.6948937.
[111] H. Zakaria and L. Fesquet, “Process variability robust energy-efficient control for nano-scaled complex SoCs,” in 10th Edition of Faible Tension Faible Consommation (FTFC'11), Marrakech, Morocco: IEEE Computer Society, May 2011, pp. 95-98. doi: 10.1109/FTFC.2011.5948928.
[112] P. Pande, C. Grecu, M. Jones, A. Ivanov, and R. Saleh, “Performance Evaluation and Design Trade-Offs for Network-on-Chip Interconnect Architectures,” IEEE Trans. Comput., vol. 54, pp. 1025-1040, Sep. 2005, doi: 10.1109/TC.2005.134.
[113] H.-P. Phan, X.-T. Tran, and T. Yoneda, “Power consumption estimation using VNOC2.0 simulator for a fuzzy-logic based low power Network-on-Chip,” in 2017 IEEE International Conference on IC Design and Technology (ICICDT), May 2017, pp. 1-4. doi: 10.1109/ICICDT.2017.7993515.
[114] R. Samanth, C. Chaitanya, and G. S. Nayak, “Power Reduction of a Functional unit using RT-Level Clock-Gating and Operand Isolation,” in 2019 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Aug. 2019, pp. 1-4. doi: 10.1109/DISCOVER47552.2019.9008025.
[115] K. N. Dang, M. Meyer, Y. Okuyama, and A. B. Abdallah, “A low-overhead soft-hard fault-tolerant architecture, design and management scheme for reliable high-performance many-core 3D-NoC systems,” J. Supercomput., vol. 73, no. 6, pp. 2705-2729, Jun. 2017, doi: 10.1007/s11227-016-1951-0.
Appendix A: Description of the data scenarios

Figure  Data scenario  Description

4(a)  Sin function  The data rate is a 1-cycle sine function:
for t in range(samp_of_data):
    Data_in(t) = k + k*sin(2 * pi * cycle * t/samp_of_data),
where k = 50 is the coefficient and cycle = 1 is the number of cycles of the sine function. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(b)  Sin function, 4 cycles  The data rate is a sine function:
for t in range(samp_of_data):
    Data_in(t) = k + k*sin(2 * pi * cycle * t/samp_of_data),
where k = 50 is the coefficient and cycle = 4 is the number of cycles of the sine function. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(c)  Tan function  The data rate is a tangent function:
for t in range(samp_of_data):
    Data_in(t) = k * tan(n*t),
where k = 10 and n = 0.03 are the coefficients. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample. The coefficients k and n can be changed to generate similar data scenarios.

4(d)  Inv_exp function  The data rate is an inverse exponential function:
for t in range(samp_of_data):
    Data_in(t) = clock_in_sample - exp(n * t),
where n = 0.1 is the coefficient. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(e)  Exp function  The data rate is an exponential function:
for t in range(samp_of_data):
    Data_in(t) = clock_in_sample - exp(n * t),
where n = 0.1 is the coefficient. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(f)  Linear function  The data rate is a linear function:
for t in range(samp_of_data):
    Data_in(t) = k*t,
where k = 5 is the coefficient. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(g)  Quadratic function  The data rate is a quadratic function:
for t in range(samp_of_data):
    Data_in(t) = k*t*t,
where k = 0.01 is the coefficient. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample. The coefficient k can be changed to generate more similar data scenarios.

4(h)  Saw function  The data rate is a sawtooth function:
for i in range(cycle): for j in range(samp_of_data//cycle):
    Data_in(t) = y[i * samp_of_data//cycle + j] = k * (i * samp_of_data//cycle + j) - i * k * samp_of_data//cycle,
where cycle = 2 is the number of cycles of the saw function and k = clock_in_sample * cycle/samp_of_data is the coefficient. The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample. The coefficient k can be changed to generate more similar data scenarios.

4(i)  Step_up function  The data rate is a step-up function:
for i in range(clock_in_sample//num_of_core): for j in range(samp_of_data//num_of_core):
    Data_in(t) = y[i * samp_of_data//num_of_core + j] = i * clock_in_sample//num_of_core.
The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample. The coefficient cycle can be changed to generate more similar data scenarios.

4(j)  Rectan_step_up function  The data rate is a rectangular step-up function:
for t in range(clock_in_sample//num_of_core): for j in range(samp_of_data//num_of_core):
    Data_in(t) = y[t * samp_of_data//num_of_core + j] = t * clock_in_sample//num_of_core.
The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample. The coefficient cycle can be changed to generate more similar data scenarios.

4(k)  Rectangle function  The data rate is a rectangle function:
for t in range(clock_in_sample//num_of_core): for j in range(samp_of_data//num_of_core):
    Data_in(t) = y[t * samp_of_data//(num_of_core + 1) + j] = clock_in_sample.
The maximum data rate is 100 data/sample; the minimum data rate is 0 data/sample.

4(l)  Step_down function  The data rate is a step-down function:
for t in range(clock_in_sample//num_of_core): for j in range(samp_of_data//num_of_core):
    Data_in(t) = y[t * samp_of_data//num_of_core + j] = (10 - t) *
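The scenario formulas above can be sketched as plain Python generators. This is a minimal illustration, not the thesis's generator script: `samp_of_data`, `k`, and `cycle` follow the table's notation, while the function names and the explicit clipping of the linear ramp at the 100 data/sample maximum are assumptions added here.

```python
import math

def sine_scenario(samp_of_data, k=50, cycle=1):
    # Sine data rate: k + k*sin(2*pi*cycle*t/samp_of_data),
    # oscillating between 0 and 2k data/sample (0..100 for k = 50).
    return [k + k * math.sin(2 * math.pi * cycle * t / samp_of_data)
            for t in range(samp_of_data)]

def linear_scenario(samp_of_data, k=5, max_rate=100):
    # Linear data rate k*t; the cap at max_rate is an assumption added
    # here so the rate stays within the table's 0..100 data/sample range.
    return [min(k * t, max_rate) for t in range(samp_of_data)]

rates = sine_scenario(100)
print(round(max(rates)), round(min(rates)))  # → 100 0
```

Swapping the body of the list comprehension for the tan, exponential, or quadratic expressions in the table yields the remaining scenarios in the same way.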