Algorithm 1. The Distributed SLA Based Joint Channel and Time Slot Selection Algorithm
5.2 Application for ION Internetworking
Interplanetary Overlay Network (ION) is an implementation of space internetworking based on the DTN protocol stack. Its data flow processing is shown in Fig. 4 [12].
Fig. 4. ION data stream
The source code of ION-DTN contains a demo file, ionrtems.c, which implements the loopback function of the BP package. However, it only transfers data packets within the Bundle layer and does not involve the RTEMS network protocol stack or physical-layer Ethernet frames.
After the aforementioned reconfiguration, the pseudoshell function in the ionrtems.c file is modified to accept the commands "bpdriver" and "bpcounter" as follows:

bpdriver -1000 ipn:1.1 ipn:3.1 -1000
bpcounter ipn:3.1

where the first parameter of bpdriver indicates the size of the BP packet (e.g., 1000 is 1000 KB), the second and third arguments indicate the source node and the destination node respectively, and the last argument indicates the number of BP packets. The parameter of bpcounter indicates the source node. With these two commands, the entire protocol stack can send and receive BP packets successfully.
6 Emulation Results and Discussion
Following the scheme proposed in this paper, we implemented the network adapter driver for the emulation node, the transplantation of the ION-DTN protocol stack, and the application for ION internetworking.
The performance of our designed emulation node is evaluated via the following experiments. Most of the emulation parameters are listed in Table 2.
We use ping as the diagnostic tool to test the reachability of our emulation node on the network. Taking our designed emulation node as the destination
Table 2. Emulation parameters

Parameter | Setting/Value
Space Router Hardware | BBB Platform
Bandwidth Limit | 100 Mbps
Linux host PC | Ubuntu 16.04
BP Hosted Transport | No
ACK Packet Size/bytes | 70
LTP Block Size/bytes | 240
LTP Segment Size/bytes | 1400
MTU Size/bytes | 1500
Number of samples | Independent test 8 times

BP: Bundle protocol layer; MTU: Maximum transfer unit
host, we send Internet Control Message Protocol (ICMP) Echo Request packets to the target host from a Linux host and wait for the ICMP Echo Replies.
Figure 5 shows the output of running ping on Linux, sending 50 probes to the target node. It lists the statistics of the entire test, and the red line in Fig. 5 indicates the average round-trip time (RTT). In this experiment, the first RTT took the longest, because the source host must first perform ARP broadcasting via the router to find the physical address corresponding to the target host's IP address. The other RTTs fluctuated slightly around the average, which indicates that the emulation node's network adapter works stably.
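For reference, the statistics that ping reports can be recomputed from raw RTT samples. The values below are hypothetical placeholders, since the actual 50 probe times appear only in Fig. 5; the sketch simply illustrates why the ARP-inflated first probe dominates the maximum.

```python
# Hypothetical RTT samples in ms; the real values appear only in Fig. 5.
# The first probe is inflated by the one-time ARP resolution described above.
rtts = [2.81, 0.92, 0.95, 0.90, 0.97, 0.94, 0.91, 0.96]

avg = sum(rtts) / len(rtts)
# mdev is ping's mean-deviation-style spread; here a plain std deviation.
mdev = (sum((r - avg) ** 2 for r in rtts) / len(rtts)) ** 0.5
print(f"rtt min/avg/max/mdev = {min(rtts)}/{avg:.3f}/{max(rtts)}/{mdev:.3f} ms")
```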
Fig. 5. The emulation node sends ICMP packets
Fig. 6. FTP test
Further, we developed an FTP server program to test the transmission ability of our embedded emulation node. Here, the function rtems_initialize_ftpd is called to initialize the FTP daemon, and the structure rtems_ftpd_configuration is used to configure the corresponding parameters, including the FTP port, the maximum number of connections, etc. The tests were run after setting up the FTP server on RTEMS, and the measurement results are illustrated in Fig. 6. They show that the uplink rate of the network adapter on the BBB board is about 32 Mb/s and the downlink rate is 90 Mb/s, which approach the rates available from the BBB board's network adapter.
As can be seen from Fig. 6, the transfer rate is maintained at 90 Mbps, so the network adapter performs well.
Fig. 7. BP, LTP, CFDP protocols
Finally, we ran experiments to test whether the DTN protocol stack works properly on RTEMS. We sent data from our emulation node to a Linux host, with both nodes loading the DTN protocol stack for data transmission. In the tests, the packet analyzer Wireshark ran on the Linux host to capture the packets transmitted between the two nodes; some of the results are illustrated in Fig. 7.
As can be seen in Fig. 7, the two nodes successfully employ the BP and LTP protocols for data delivery. This shows that the embedded emulation node can load and run the DTN protocol stack effectively, so our designed emulation node meets the requirements for space network protocol testing.
7 Conclusion
This paper proposed a scheme to design and implement an embedded emulation node, providing an approach to emulate a precise network environment for measuring network protocols in the space information network. The design uses the Beaglebone Black (BBB) embedded core board, whose hardware performance approximates that of a satellite, ports the real-time operating system RTEMS to the BBB board, and then loads and runs ION-DTN on it. The experimental results show that the function and performance of the designed emulation node meet the requirements for space network protocol testing.
References
1. Beaglebone Black User Manual. https://cdn.sparkfun.com/datasheets/Dev/Beagle
2. AM335x Sitara Processors Users Guide. http://www.ti.com.cn/cn/lit/er/sprz360i/sprz360i.pdf
3. RTEMS Ada Users Guide. https://www.rtems.org/
4. Faxin, Y.: Comparison and analysis of commonly used embedded real-time operation. J. Comput. Appl. 4, 761–764 (2006)
5. Dong, J., Li, Y., Yang, Q., Zhai, J.: Real time evaluation of embedded operating system oriented to space system. J. Comput. Eng. Des. 1, 114–120 (2013)
6. Sun, L.: A comparative study of four popular embedded real-time operating systems - VxWorks, QNX, ucLinux, RTEMS. J. Comput. Appl. Softw. 8, 196–197 (2007)
7. Zhou, J.: Research on key technologies of space integrated information network based on DTN. ACM SIGBED Rev. 11, 20–25 (2014)
8. Yang, H.: Porting of RTEMS embedded operating system. J. Softw. 12, 108–113 (2015)
9. Bloom, G., Sherrill, J.: Scheduling and thread management with RTEMS. ACM SIGBED Rev. 11, 20–25 (2014)
10. Fan, C., Gui, X.: Development of RTEMS real-time system board support package. J. Appl. Single Chip Microcomput. Embed. Syst. 6, 35–38 (2005)
11. Huazhong, W., Chen, H.: Design of network driver based on RTEMS. J. Electron. World 2, 129–130 (2014)
12. ION.pdf. http://ipnsig.org/wp-content/uploads/2015/05/Whats-new-in-ION.pdf
for Solving TSP

Feng Qi and Mengmeng Liu
Shandong Normal University, Jinan, China
qfsdnu@126.com, 1783797657@qq.com
Abstract. Spiking neural P systems (SNPS) are a class of distributed and parallel computing models that incorporate the idea of spiking neurons into P systems. Membrane computing (MC) combined with evolutionary computing (EC) is called evolutionary MC. In this work, we combine SNPS with a heuristic algorithm to solve the travelling salesman problem (TSP). To this aim, an extended spiking neural P system (ESNPS) is proposed, and a certain number of ESNPS can be organized into an optimization SNPS (OSNPS). Extensive experiments on the TSP are reported to experimentally demonstrate the viability and effectiveness of the proposed neural system.
Keywords: OSNPS · GA · Membrane algorithm · TSP
1 Introduction
Membrane computing is one of the recent branches of natural computing. The resulting models are distributed and parallel computing devices, usually called P systems. Three main classes of P systems have been investigated: cell-like P systems [1], tissue-like P systems [2] and neural-like P systems [3]. The Spiking Neural P System (SNPS, for short) is a class of neural-like P systems inspired by the way biological neurons process information and communicate with one another by means of electrical spikes.
Evolutionary computing (EC) is based on Darwin's theory of evolution: it simulates the evolution process to construct heuristic optimization algorithms with self-organizing, adaptive and self-learning characteristics, such as genetic algorithms, ant colony optimization, particle swarm optimization, and so on.
MC combined with EC is called evolutionary membrane computing [4], of which the membrane algorithm is one research direction. A membrane algorithm is a hybrid optimization algorithm that combines the structure, evolution rules, and computation mechanism of a membrane system with the principles of evolutionary computation.
Research on membrane algorithms can be traced back to 2004, when Nishida combined a membrane structure with tabu search to solve traveling salesman problems [5]. In 2008, a one-level membrane structure combined with a quantum-inspired evolutionary algorithm was put forward to solve knapsack problems [6]. In 2013, a tissue membrane system was used to solve parameter optimization problems [7]. These investigations indicate the feasibility of P systems for multifarious optimization problems. At present, however, membrane algorithms are mainly
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018
X. Gu et al. (Eds.): MLICOM 2017, Part II, LNICST 227, pp. 668–676, 2018.
https://doi.org/10.1007/978-3-319-73447-7_71
focused on cell-like and tissue-like P systems; research on membrane algorithms over neural-like P systems is relatively scarce. In 2014, Professor Zhang designed an optimization spiking neural P system [8], which can be used to solve the knapsack problem, a famous NP-complete problem. The results show that the designed optimization spiking neural P system has obvious advantages in solving knapsack problems.
The Traveling Salesman Problem (TSP) is a widely studied NP-hard combinatorial problem, famous for being difficult to solve. It is therefore meaningful, both in theory and in applications, to develop techniques for such problems. In this paper, we combine SNPS and the genetic algorithm (GA) to solve the TSP. First, we design the optimization SNPS (OSNPS), establishing the connection between the GA and the membrane system. Second, we implement our ideas on the MATLAB platform.
The ideas of this article not only contribute to membrane algorithms for neural-like P systems, but also offer a new way to solve the TSP.
2 Related Background
Generally, an SNP system is composed of neurons, spikes, synapses, and rules.
Neurons may contain a number of spikes, spiking rules and forgetting rules, and directed connections between neurons are realized by synapses. A neuron can send information to its neighboring neurons by using a spiking rule. By using a forgetting rule, a number of spikes are removed from the neuron, and thus from the system.
An SNP system of degree m ≥ 1 is a construct of the form

Π = (O, σ1, σ2, …, σm, syn, in, out)

where

• O = {a} is the singleton alphabet (a is called a spike);
• σ1, σ2, …, σm are neurons of the form σi = (ni, Ri) with 1 ≤ i ≤ m, where ni is a natural number representing the initial number of spikes in neuron σi, and Ri is the set of rules of the neuron, of the following forms:
  (a) E/a^c → a; d is a spiking rule, where E is a regular expression over {a}, and c, d are integers with c ≥ 1, d ≥ 0;
  (b) a^s → λ is a forgetting rule, with the restriction that for any s ≥ 1 and any spiking rule E/a^c → a; d, a^s ∉ L(E), where L(E) is the regular language associated with the regular expression E and λ is the empty string;
• syn ⊆ {1, 2, …, m} × {1, 2, …, m} is the set of synapses between neurons, where i ≠ j for each (i, j) ∈ syn, and for each (i, j) ∈ {1, 2, …, m} × {1, 2, …, m} there is at most one synapse (i, j) in syn;
• in, out ∈ {1, 2, …, m} indicate the input and output neurons, respectively.

In SNP systems, a spiking rule E/a^c → a; d is applied as follows: if neuron σi contains k spikes with a^k ∈ L(E) and k ≥ c, the rule is enabled. By using the rule, c spikes are consumed, so k − c spikes remain in neuron σi, and after d time units one spike is sent to every neuron σj such that (i, j) ∈ syn. If E = a^c, the rule is simply written as a^c → a; d, and if d = 0 it can be omitted, so the rule is written as a^c → a.

Rules of the form a^s → λ, s ≥ 1, are forgetting rules, with the restriction a^s ∉ L(E) for every spiking rule of the neuron (that is, a neuron can never choose between a spiking rule and a forgetting rule at the same moment), where L(E) is the regular language associated with the regular expression E and λ is the empty string. If neuron σi contains exactly s spikes, the forgetting rule a^s → λ can be executed, removing all s spikes from the neuron.
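The rule semantics just described can be sketched in Python. This is an illustrative toy, not part of the paper: the rule encoding and the helper `step_neuron` are assumptions, and delays (d > 0) as well as nondeterministic choice between simultaneously enabled rules are omitted for brevity.

```python
import re

def step_neuron(spikes, rules):
    """Apply the first enabled rule of a neuron holding `spikes` spikes.

    Each rule is (E, c, kind): E is a regex over {'a'}, c the number of
    spikes consumed, kind 'spike' or 'forget'.  Returns (remaining, emitted)
    where emitted is 1 if one spike is sent to the neighbouring neurons.
    """
    word = 'a' * spikes
    for E, c, kind in rules:
        if re.fullmatch(E, word) and spikes >= c:
            if kind == 'spike':
                return spikes - c, 1   # consume c spikes, emit one spike
            return spikes - c, 0       # forgetting rule: just remove spikes
    return spikes, 0                   # no rule enabled

# Example neuron: spiking rule a(aa)*/a -> a fires on any odd spike count,
# forgetting rule aa -> λ removes exactly two spikes (a^2 is not in L(E)).
rules = [('a(aa)*', 1, 'spike'), ('aa', 2, 'forget')]
print(step_neuron(3, rules))   # odd count: spiking rule applies
print(step_neuron(2, rules))   # exactly 2: forgetting rule removes both
```

Note that the example respects the restriction above: the forgetting rule's spike count (2) is not in the language of the spiking rule's regular expression, which matches odd counts only.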
The TSP is the problem of finding a shortest closed tour visiting each city once and only once. Given a set {c1, c2, …, cn} of n cities and a symmetric distance d(ci, cj) giving the distance between cities ci and cj, the goal is to find a permutation π of these n cities that minimizes the following function:

  Σ_{i=1}^{n−1} d(c_π(i), c_π(i+1)) + d(c_π(n), c_π(1))    (1)
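Equation (1) can be implemented directly. The sketch below is illustrative only; the `tour_length` helper and the four-city unit-square instance are assumptions, not from the paper.

```python
def tour_length(perm, d):
    """Length of the closed tour visiting cities in the order `perm`.

    `d` is a symmetric distance function d(i, j); the wrap-around edge
    from perm[n-1] back to perm[0] closes the tour, as in Eq. (1).
    """
    n = len(perm)
    return sum(d(perm[i], perm[(i + 1) % n]) for i in range(n))

# Toy 4-city instance on a unit square (hypothetical coordinates).
coords = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (0, 1)}

def d(i, j):
    (xi, yi), (xj, yj) = coords[i], coords[j]
    return ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5

print(tour_length([1, 2, 3, 4], d))  # perimeter of the square: 4.0
```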
3 OSNPS for TSP
3.1 The Structure of OSNPS
The SNPS can be represented graphically: a directed graph represents the structure, where neurons are connected to each other by synapses, and the output neuron emits spikes to the environment via its outgoing synapse.
Inspired by the fact that spiking neural P systems can generate string languages or spike trains [9], an extended spiking neural P system (ESNPS, for short) is proposed to produce a binary string, whose corresponding probability string represents a chromosome. An ESNPS of degree m ≥ 1 is shown in Fig. 1.
Each ESNPS consists of m neurons. σ1, σ2, …, σm are neurons of the form σi = (1, Ri, Pi) with 1 ≤ i ≤ m, where Ri = {r_i^1, r_i^2} (r_i^1 = {a → a} and r_i^2 = {a → λ}) is a set of rules and Pi = {p_i^1, p_i^2} is a set of probabilities, where p_i^1 and p_i^2 are the selection probabilities of rules r_i^1 and r_i^2 respectively, with p_i^1 + p_i^2 = 1. If the i-th neuron spikes, its output is 1, chosen with probability p_i^1; otherwise its output is 0, chosen with probability p_i^2.

For example, for an ESNPS of degree m = 5, a probability matrix is shown below. If we get the spike train [0 0 1 1 0], then the corresponding probability vector is [0.49 0.65 0.42 0.79 0.45].

p_i^1: 0.51 0.35 0.42 0.79 0.55
p_i^2: 0.49 0.65 0.58 0.21 0.45
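As a sketch (not from the paper), the decoding of a spike train into its probability vector for this m = 5 example can be written as follows; the `decode` helper is a hypothetical name.

```python
# Probability matrix from the text: p1 holds the spiking-rule probabilities
# p_i^1, p2 the forgetting-rule probabilities p_i^2 (columns sum to 1).
p1 = [0.51, 0.35, 0.42, 0.79, 0.55]
p2 = [0.49, 0.65, 0.58, 0.21, 0.45]

def decode(spike_train):
    """Bit 1 selects p_i^1, bit 0 selects p_i^2 for each position i."""
    return [p1[i] if b == 1 else p2[i] for i, b in enumerate(spike_train)]

print(decode([0, 0, 1, 1, 0]))  # -> [0.49, 0.65, 0.42, 0.79, 0.45]
```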
From Fig. 2 we can see that a certain number of ESNPS can be organized into an OSNPS by introducing a guider to adjust the selection probabilities and adding a subsystem (σ_{m+1} and σ_{m+2}) to act as the spike supplier. The OSNPS consists of H ESNPS: ESNPS1, ESNPS2, …, ESNPSH. Each ESNPS is identical (Fig. 1) and the operation steps are illustrated in Subsect. 3.2. Thus each ESNPS outputs a spike train at each moment of time, so the OSNPS outputs H binary strings, from which we obtain the corresponding probability matrix.
In the OSNPS, σ_{m+1} = σ_{m+2} = (1, {a → a}); σ_{m+1} and σ_{m+2} spike at each step, sending a spike to each ESNPS and reloading each other continuously. We record the spike train matrix Tt (t is the current evolution generation) output by the OSNPS and the corresponding probability matrix Pt. If we can adjust the probabilities, we can control the output matrix. In this paper, we use the GA as the guider algorithm to adjust the probabilities.
Fig. 1. An example of ESNPS structure
Fig. 2. The structure of OSNPS
We introduce the smallest position value (SPV) [10] method into the genetic algorithm, and Table 1 explains this encoding and decoding method. We take 2 4 1 3 5 as the city sequence.
The input of the guider is a spike train Tt with H × m bits. The output of the guider is the rule probability matrix Pt = [p_ij]_{H×m}, which is made up of the rule probabilities of the H ESNPS, where p_ij is the probability of a spiking rule or forgetting rule. For example, for an ESNPS of degree m = 5, the vector mentioned above, [0.49 0.65 0.42 0.79 0.45], could be a part of Pt.
3.2 The Operation Steps

1. Initialize the system parameters;
2. Neurons σ_{m+1} and σ_{m+2} spike and supply spikes to the H ESNPS. At the same time, the H ESNPS spike and output the spike train matrix Tt (a 0–1 matrix);
3. Put Tt into the guider and rearrange it as the corresponding probability matrix Pt; take Pt as the initial population of the GA and convert Pt to the real matrix Mt using the SPV idea;
4. Calculate the fitness function;
5. Selection operation: roulette wheel selection and an optimal-individual preservation strategy are used; the first ten percent of the best individuals are saved;
6. Crossover operation: the OX crossover algorithm is adopted;
7. Mutation operation: transposition mutation is used;
8. Judge whether the termination condition (the maximum generation) is met. If it is, output the final result and end; otherwise set t = t + 1 and go to step 9;
9. Combine the updated probability matrix Pt with the corresponding 0–1 matrix Tt to update the probability of each ESNPS, and go to step 2;
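The loop above can be sketched as follows. This is a heavily simplified, hypothetical Python skeleton: the GA operators of steps 5–7 (roulette-wheel selection, OX crossover, transposition mutation) are collapsed into a simple update that pulls the rule probabilities toward the best string found so far, and `run_osnps`, the toy fitness, and all constants are assumptions for illustration only.

```python
import random

def run_osnps(H, m, fitness, max_gen=500):
    """Simplified sketch of steps 1-9 for a binary toy problem.

    H ESNPS each emit an m-bit spike train; lower fitness is better.
    """
    P = [[0.5] * m for _ in range(H)]                 # step 1: probabilities
    best = None
    for t in range(max_gen):
        # step 2: each neuron outputs 1 with probability p_i^1
        T = [[1 if random.random() < P[h][i] else 0 for i in range(m)]
             for h in range(H)]
        # steps 3-4: evaluate the population, keep the best string
        for row in T:
            if best is None or fitness(row) < fitness(best):
                best = row
        # steps 5-9 (abstracted): pull every probability toward the best
        for h in range(H):
            for i in range(m):
                target = 0.9 if best[i] == 1 else 0.1
                P[h][i] += 0.1 * (target - P[h][i])
    return best

# Toy fitness: number of 0 bits, so the optimum is the all-ones string.
random.seed(1)
print(run_osnps(H=10, m=8, fitness=lambda s: s.count(0), max_gen=60))
```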
In the implementation of the OSNPS, each neuron in an ESNPS applies its spiking rule or forgetting rule according to the rule probabilities, which increases the population diversity (Fig. 3).
Table 1. An example of SPV

Dimension j | Position p_ij | Sequence
1 | 0.65 | 2
2 | 0.32 | 4
3 | 0.87 | 1
4 | 0.46 | 3
5 | 0.21 | 5
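The decoding in Table 1 can be reproduced as follows. The `spv_decode` helper is hypothetical, not from [10]; note that matching the table's sequence column requires ranking the positions in descending order (the largest position value receives rank 1).

```python
def spv_decode(positions):
    """Turn a real-valued position vector into a city sequence by ranking.

    Dimension with the largest position value gets rank 1, the next
    largest rank 2, and so on, reproducing Table 1.
    """
    order = sorted(range(len(positions)), key=lambda j: -positions[j])
    rank = [0] * len(positions)
    for r, j in enumerate(order, start=1):
        rank[j] = r
    return rank

print(spv_decode([0.65, 0.32, 0.87, 0.46, 0.21]))  # -> [2, 4, 1, 3, 5]
```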
4 Experimental Results
In this section, our system was implemented in MATLAB and tested on a personal PC with a Pentium IV 3.0 GHz CPU and 512 MB of memory. The population size is 30, the crossover probability is pc = 0.8, the mutation probability is pm = 0.2, and the maximum iteration number N is 500. Since the OSNPS mainly combines SNPS with the GA, we ran a contrast experiment between the improved GA in the guider and the OSNPS system (Figs. 4 and 5).
The experiments show that for 30 cities the results of the OSNPS are better than those of the guider algorithm, but the OSNPS finds its optimum in generation 402 while the guider algorithm finds its optimum in generation 247 (Fig. 6).
Fig. 3. The system flow diagram
Fig. 4. 30 cities in guider algorithm
Fig. 5. 30 cities in OSNPS
5 Conclusion
In this paper, we proposed the OSNPS for solving the TSP. The OSNPS establishes the connection between the GA and the membrane system. Experimental results show that the OSNPS can effectively solve the TSP and prevent the GA from falling into local optima. The ideas of this article not only contribute to membrane algorithms for neural-like P systems, but also offer a new way to solve the TSP.
Certainly, the OSNPS has some drawbacks in solving the TSP. As the scale of the problem grows, the advantage of the OSNPS becomes increasingly obscure and the system needs more time to solve problems than the standard GA. Future work is therefore to improve the SNP system or the GA to optimize the experimental results.
Acknowledgment. This work was supported by the Natural Science Foundation of China (No. 61502283) and the Natural Science Foundation of China (No. 61472231).
References
1. Păun, G.: Computing with membranes. J. Comput. Syst. Sci. 61(1), 108–143 (2000)
2. Freund, R., Păun, G., Pérez-Jiménez, M.J.: Tissue-like P systems with channel-states. Theor. Comput. Sci. 330, 101–116 (2005)
3. Ionescu, M., Păun, G., Yokomori, T.: Spiking neural P systems. Fund. Inform. 71(2), 279–308 (2006)
4. Zhang, G., Gheorghe, M., Pan, L., et al.: Evolutionary membrane computing: a comprehensive survey. Inf. Sci. 279(1), 528–551 (2014)
Fig. 6. The searching process of OSNPS
5. Nishida, T.Y.: An application of P systems: a new algorithm for NP-complete optimization problems. In: Proceedings of 8th World Multi-Conference on Systems, Cybernetics and Informatics, pp. 109–112 (2004)
6. Zhang, G.X., Gheorghe, M., Wu, C.Z.: A quantum-inspired evolutionary algorithm based on P systems for knapsack problem. Fund. Inform. 87(1), 93–116 (2008)
7. Zhang, G., Cheng, J., Gheorghe, M., Meng, Q.: A hybrid approach based on differential evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems. Appl. Soft Comput. 13(3), 1528–1542 (2013)
8. Zhang, G., Rong, H., Neri, F., et al.: An optimization spiking neural P system for approximately solving combinatorial optimization problems. Int. J. Neural Syst. 24(05), 1440006 (2014)
9. Reeves, C.R.: A genetic algorithm for flowshop sequencing. Comput. Oper. Res. 22(1), 5–13 (1995)
10. Chen, H., Freund, R., Ionescu, M., et al.: On string languages generated by spiking neural P systems. Fund. Inform. 75(75), 141–162 (2007)