Journal of Unmanned System Technology


A Swarm Simulator

John R. Page and Haoyang Cheng

School of Mechanical and Manufacturing Engineering, University of New South Wales



Abstract Cooperative teams of Unmanned Aerial Vehicles (UAVs) have applications in a number of civil and military missions. The cooperative control of a group of UAVs is a complex problem that is dominated by uncertainty, limited information, and task constraints. Centralized, hierarchical and decentralized decision and control algorithms have been developed to address this complexity. In order to investigate potential cooperative UAV control algorithms, a multi-vehicle simulator, called the cluster simulator, was developed. The cluster simulator also has the flexibility to simulate other distributed logic systems, such as power networks and land vehicles, though we are only just starting to investigate these applications.

Keywords Human-machine interface, Simulation, Swarm, UAVs.


Autonomous swarm technology, which has its origin in the behaviour of social insects, has found an increasing number of applications within engineering. While a number of researchers have investigated how animals with very limited cognitive ability can perform complex tasks when combined in groups, Reynolds made a major breakthrough, from the flying-vehicle perspective, when he demonstrated with a computer program that only three rules were required to produce the flocking of birds or the schooling of fish [1]. Those legendary rules are:

1.       Move close to other flock mates (Attraction)

2.       Don’t get too close (Repulsion)

3.       Maintain a velocity similar to that of nearby flock mates (Alignment)
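These three rules translate almost directly into code. The sketch below is a minimal two-dimensional illustration; the interaction radii and weighting factors are arbitrary assumed values for demonstration, not taken from Reynolds' program.

```python
import math

def boids_step(positions, velocities, r_repulse=1.0, r_attract=5.0,
               w_att=0.01, w_rep=0.05, w_align=0.05):
    """One update step applying the three flocking rules (illustrative 2-D
    sketch; radii and weights are assumed values)."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        ax = ay = 0.0
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            d = math.hypot(dx, dy)
            if d < r_repulse:        # Rule 2: don't get too close (repulsion)
                ax -= w_rep * dx
                ay -= w_rep * dy
            elif d < r_attract:      # Rule 1: move closer to flock mates (attraction)
                ax += w_att * dx
                ay += w_att * dy
            if d < r_attract:        # Rule 3: match nearby velocities (alignment)
                ax += w_align * (velocities[j][0] - velocities[i][0])
                ay += w_align * (velocities[j][1] - velocities[i][1])
        vx, vy = velocities[i]
        new_vel.append((vx + ax, vy + ay))
    return new_vel
```

Iterating this step over all agents is all that is needed to produce flock-like motion.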

Real flocks of birds or shoals of fish follow additional rules, but these three are sufficient to generate complex behaviour with very simple computing. A significant research effort has been invested in recent years into the design and simulation of intelligent swarm systems [2]. Examples from social insects, such as foraging and the division of labour, show that self-organized systems (SOSs) can generate useful emergent behaviours at the system level. Self-organised, swarm-based systems do not require a centralised plan or deliberate globally coordinated actions from individuals. This has the potential to reduce mission planning and therefore the amount of intelligence required in control-system design. Evolution has followed this track because treating a system as a number of independent elements provides a robust method of dealing with complex problems that are dominated by uncertainty, limited information, and task constraints. Such systems often produce a much better outcome than the traditional hierarchical approach. This is illustrated by the response to the massive power failure that took place in the United States of America and Canada on August 14th, 2003, which demonstrated the weakness of trying to control chaotic systems using central, hierarchical control.

After power was restored from the worst power failure in North American history, an attempt was made to isolate the cause. The fault was found to have been initiated by vegetation striking a power line between the Harding substation south of Cleveland, Ohio and the Chamberlin substation twelve miles away. Conventional wisdom dictated that if all such lines were kept clear of vegetation, in future all would be well. The problem, however, was not the component failure but the system: as it had grown in an organic manner, it had become a classic deterministically chaotic system. It is now realised that ensuring no wires would ever again fail due to vegetation contact, were that possible, would be no more likely to prevent a repeat of the event than killing the proverbial butterfly, or all butterflies, in the Amazon rain forest would make Britain safe from hurricanes [3]. Fortunately wiser heads prevailed, the complex system was restructured along swarm lines, and no significant events have occurred since.


There are a number of advantages to using a fleet of UAVs to carry out some missions, despite the increase in complexity. They allow a much wider spread of sensors. This has major advantages when the area to be investigated is large, or when the event being investigated is dynamic and real-time data is required quickly, as with a tsunami. They reduce the risk of mission failure due to a single vehicle failure, whether caused by system or vehicle design, operation, or an external act such as hostile action. This ruggedness is reduced, however, when all vehicles are identical and thus subject to possible generic failures. Finally, there can be cost advantages in building a number of less sophisticated vehicles rather than one more complex one. The system itself can also be easily, and relatively cheaply, expanded or contracted by simply adding or removing vehicles. There are a number of ways of controlling fleets of UAVs, normally referred to as swarms, ranging from complete central control to full autonomy.

A.     Central Hierarchical Control of Swarms of UAVs

In this method of control the action of each individual within the swarm is managed by a central control system. The degree of control can vary according to the needs of the mission and external geo-political constraints. To date, all vehicles used in combat, such as the Predator, have been remotely controlled and are not UAVs in the true sense of the definition, i.e. possessing significant autonomy.

These vehicles are operated by a remote pilot located in a central control system who has the same authority over the vehicle's behaviour as a traditional pilot would have over a manned vehicle. Currently, the human factor associated with the UAV operators' workload is one of the key limitations to increasing future UAS effectiveness [4]. While in general these "drones" appear to operate alone rather than in swarms, they could easily be formed up in a swarm if that was felt to be a precursor to a successful mission. Beyond remote piloting, centralized controllers have been developed to achieve cooperation within the group. Schumacher, et al. developed a centralized task assignment algorithm using a mixed-integer linear programming formulation. This algorithm can be used to assign multiple tasks, subject to timing and task-order constraints, to the vehicles in an optimal manner [5]-[7]. Only for small scenarios with a few vehicles and targets can a solution be found in sufficient time using such methods. For larger scenarios, Shima proposed a method using a genetic algorithm to solve this task assignment problem [8].

There are two major weaknesses in using this method of control for swarms. In order for the remote controller to have complete situational awareness, the vehicle has to transmit a large amount of data to the central control. This rapidly becomes a design driver for the unmanned aerial system (UAS). It is possible to reduce the need to transmit vast amounts of data by giving more autonomy to the vehicle, and this is in fact done: the vehicle provides all the autopilot functions while the central control performs the flight-director role. For military vehicles that may have to operate in an environment with large amounts of electromagnetic interference, and possibly subject to electronic attack, this weakness associated with data transmission can be significant. The other major weakness of centralised control is that successful operation depends on the availability and integrity of the control centre, which may be vulnerable. If the function of the control centre is compromised, either through natural disaster or hostile action, the swarm becomes non-viable. Central control does, however, provide the best possibility of optimising the use of assets, which gives it an advantage in procurement, but it is more costly to operate, which mitigates against this.

B.     Distributed Control of Swarms of UAVs

Rather than have a single central control, it is possible to set up a series of nodes with a determined degree of autonomy from the central control. This has the advantage of moving the control closer to the "action", which has advantages in both disaster recovery scenarios and military campaigns in that those who need to deploy the assets have direct control of them. The devolved control centre can be fixed or mobile. One scenario is for a number of UAVs to be controlled by an airborne control station. This offers the possibility of having one strike fighter supported by a number of unmanned vehicles, which provides greater area coverage, shielding, and a larger, more flexible weapon capability. The advantages of this arrangement over straight central control are that far less communication is required and there is no single highly vulnerable centre. As those controlling the swarm are involved more directly in the mission, less situational awareness information needs to be transmitted, and this can often be supported, as in the airborne system, by direct observation. While the remote control centre may be even more vulnerable, its loss only degrades rather than destroys the system as a whole. It is not as efficient an asset user, in financial terms, as central control, and it is often more costly to operate due to duplication and the need to support and protect the controllers in often vulnerable locations.

C.     Autonomous Swarms

It is autonomous swarms, or Self-Organized (SO) systems, that are currently the subject of much interest and research. A SO system, or swarm, is typically a decentralized control system made up of autonomous agents that are distributed in the environment and follow stimulus-response behaviors [9]. Each vehicle, or agent, has a set of rules that governs its behaviour in relationship to the other agents and the environment. Even though these rules are often very simple, they can lead to very complex group behaviour. Termites, for example, construct cities complete with environmental control systems with no overall concept, based only on each individual termite's simple rule set. In the same way, UAVs governed by a very simple rule set can exhibit complex behaviour and, if the rule set is correctly defined, successfully complete the mission. A swarm approach to unmanned system control has been explored for a number of military and civilian applications. Gaudiano, et al., in their studies, applied quantitative methodologies to evaluate the performance of UAV swarms under a variety of conditions [10]. In Price's research, ten self-organization rules were implemented whose weight factors were collected into a single fitness function. This function was further refined using a genetic algorithm within the simulation [11]. Another widely adopted mechanism is digital pheromone maps, which imitate the foraging behavior of ants. Digital pheromones are modeled on the pheromone fields generated by individual vehicles. By synchronizing their maps the UAVs coordinate to avoid redundant searches [12]. Hauert, et al. have investigated the potential to use a swarm of UAVs to establish a wireless communication network [13]. They also applied artificial evolution to develop neural controllers for the swarm of homogeneous agents.
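As a rough illustration of the pheromone-map mechanism, the sketch below implements deposit, evaporation, map synchronization and next-cell selection over a grid of cells. The deposit amount, decay rate and max-merge rule are our assumptions for demonstration, not details taken from [12].

```python
def deposit(pheromone, cell, amount=1.0):
    """Mark a searched cell; higher values mean 'visited more recently/often'."""
    pheromone[cell] = pheromone.get(cell, 0.0) + amount

def evaporate(pheromone, rate=0.1):
    """Decay all pheromone so old visits fade and cells become worth revisiting."""
    for cell in list(pheromone):
        pheromone[cell] *= (1.0 - rate)

def merge_maps(local, received):
    """Synchronise with a neighbour by keeping the stronger reading per cell,
    so both UAVs agree on which cells have already been searched."""
    for cell, value in received.items():
        local[cell] = max(local.get(cell, 0.0), value)

def next_cell(pheromone, candidates):
    """Treat the search pheromone as repulsive: fly to the least-visited
    neighbouring cell, which steers UAVs away from redundant searches."""
    return min(candidates, key=lambda c: pheromone.get(c, 0.0))
```

Each UAV keeps its own map, deposits as it searches, and merges whenever it hears a neighbour's map.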

The exact nature of the emergent behaviour is, however, impossible to predict, and this is one of the challenges faced by those working in this area. There are two distinct types of self-organising swarms: homogeneous and heterogeneous. Homogeneous swarms consist of identical vehicles with the same operational characteristics, while heterogeneous swarms are composed of vehicles with different characteristics. For example, a homogeneous military ground attack swarm might consist of vehicles that are able both to track and to deliver weapons. On the other hand, a heterogeneous ground attack swarm might consist of some vehicles specialising in tracking while others deploy the weapon system. Self-organising swarms can never use their assets as efficiently as centrally controlled systems, but they do have two advantages: they require much less control data transmission, and the system is very rugged. As they only need to communicate with, at most, the other members of the swarm, their communication links are short and can be low power. When an individual vehicle is lost through accident or hostile action, the swarm re-organises, rather like ants when some of their fellows are squashed. As there is no central control, a military swarm can in effect become a "doomsday" machine; that is to say, once deployed by a nation, destruction of the nation's infrastructure or command centres makes it harder rather than easier to disable. The main problem with this type of system is that, as the emergent behaviour is unpredictable, the only way to investigate it is through simulation. Agent-based simulations of complex adaptive systems are becoming an increasingly popular tool in the artificial life community. The application of agent-based simulations in combat modelling has been explored by Ilachinski [14]. He argued that agent-based models are most useful when they are applied to complex systems that can be neither wholly described nor built by conventional models based on differential equations.
Macal and North discussed applications for agent-based simulation and addressed toolkits and methods for developing agent-based models [15]. Compared with previous agent-based simulations, our design has a high-fidelity agent dynamics model and a realistic communication channel. This allows further studies into more realistic scenarios with UAS operating under limited communication.


In order to investigate swarm behaviour we designed and built a swarm simulator, under a military research budget, in 2006. The aim of the machine was to investigate swarm behaviour under central, distributed and autonomous control, and it has provided sterling service to our research and understanding. This flexibility has been retained in the present machine, which was developed from the original and keeps the ability to carry out comparative studies.

Figure 1 The new Swarm Simulator at UNSW

It was necessary to build a new machine due to the improvements in the computing capacity of relatively cheap PCs. The swarm in its basic form consists of eight computers, each running FlightGear, a free, open-source program that comes with sufficiently accurate flight dynamics models. Each vehicle, or machine, is responsible for simulating its own flight dynamics and graphical output, collecting internal state data and communicating its state data to the other machines in the network. This configuration also allows real hardware to be connected to the system, simply replacing the corresponding machine, to create a hardware-in-the-loop simulation.

This decentralized system setup possesses multiple processing units, each of which operates independently of the others. The control algorithms are implemented using the dedicated scripting language that comes as part of the FlightGear project. The script containing the control algorithm, or behaviour set, is placed inside the agent model directory; thus each agent has its own unique behaviours. The decentralized system is highly scalable, since the flight dynamics model for each agent runs on an independent machine. Extra computers can be added to the system if a larger swarm is desired, but eight has been found to be the minimum number for realistic swarm simulation. It is also possible to replace any one of the computers with a link to our twin-jet trainer flight simulator to allow the "crewman" in the rear cockpit to act as a human-in-the-loop distributed control system.


Figure 2 The Twin Seat Trainer Simulator

On the previous simulator we also substituted one of the simulated vehicles with one of the university's real aircraft, but this has not yet been attempted with the new simulator. The eight platforms for the flight simulation program are linked to the server via a local area network. Each UAV is able to multicast its own location onto the local UAV network; other UAVs can receive this data and use it to make local decisions. The human-machine interface consists of two machines: one is used as the fleet monitor, which extracts data from the server and displays flight trajectories on a Google Maps application; the other is used as a central control when such systems are being investigated, and when the swarm is in self-organizing mode it acts only to launch the system and provide initial instructions.
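The state-multicasting scheme can be sketched with standard UDP multicast. The group address, port and JSON message layout below are illustrative assumptions, not the simulator's actual network settings.

```python
import json
import socket
import struct

# Assumed multicast group and port; the real settings of the UNSW
# simulator's UAV network are not given in the paper.
MCAST_GRP, MCAST_PORT = "224.0.0.42", 5400

def encode_state(uav_id, lat, lon, alt):
    """Pack a UAV's state report into a small JSON datagram."""
    return json.dumps({"id": uav_id, "lat": lat, "lon": lon, "alt": alt}).encode()

def decode_state(data):
    """Recover a neighbour's state report from a received datagram."""
    return json.loads(data.decode())

def open_sender():
    """Socket for multicasting this vehicle's state onto the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
    return sock

def open_receiver():
    """Socket that joins the group to hear every other vehicle's reports."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

A vehicle would then call `open_sender().sendto(encode_state("uav3", -33.9, 151.2, 120.0), (MCAST_GRP, MCAST_PORT))` on each update, while every peer reads datagrams from `open_receiver()`.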



Figure 3 Basic layout of the Swarm Simulator


While data can of course be extracted in numerical form, the flight trajectories are also displayed on a Google Earth map. This gives an easy-to-comprehend impression of what is occurring and also makes the system attractive for interactive use.


Figure 4 The swarm of UAVs generated by the simulator displayed on Google Maps


In order to prove the operation of the remodelled simulator, three scenarios have been considered that, while far simpler than the machine's capability, have allowed some confidence to be gained in its design and construction.

A.     Follow the leader

In this simple scenario one of the vehicles is defined as the leader and the remaining vehicles are programmed to follow. To avoid any risk of collision, the vehicles are programmed to maintain a safe clearance distance from each other. They are also programmed to adjust their speed, so the further they are from the leader, the faster they fly, in order to retain a compact formation. The flight profile of the leading vehicle can be adjusted either automatically or via human input. Classical chase-type patterns can easily be generated by having the lead vehicle fly a course such as a sinusoidal track. As the followers' flight profiles are controlled by a semi-realistic flight simulation program, their ability to track depends on the flight characteristics of the selected aircraft. This means we can easily change the behaviour of the swarm as a whole by modifying the flight dynamics either of the whole swarm or of some individuals. This capability will assist us when we start to examine the behaviour of the heterogeneous and homogeneous swarms we are currently investigating.
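A minimal version of the follower rule might look as follows; the base speed, gain and clearance distance are illustrative assumptions, not the values used in the simulator.

```python
import math

def follower_command(own_pos, leader_pos, base_speed=30.0, gain=0.5,
                     max_speed=60.0, min_sep=50.0):
    """Follow-the-leader rule sketch: head toward the leader, flying faster
    the further behind we are, and hold off inside the safety clearance.
    All speeds and distances here are assumed demonstration values."""
    dx = leader_pos[0] - own_pos[0]
    dy = leader_pos[1] - own_pos[1]
    dist = math.hypot(dx, dy)
    heading = math.atan2(dy, dx)          # fly directly at the leader
    if dist < min_sep:                    # inside the safe clearance: back off
        speed = 0.8 * base_speed
    else:                                 # further away -> fly faster, up to a limit
        speed = min(base_speed + gain * (dist - min_sep), max_speed)
    return heading, speed
```

Each follower re-evaluates this command every control cycle, so the formation contracts as stragglers catch up.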

B.     Convoy

In this scenario the vehicles form a protective shield around a selected vehicle. This has a great deal of similarity with case A; the major difference is that a slightly more complex rule set is required, since some of the vehicles in this scenario are also in front of and beside the target vehicle. When the vehicles are in a tight formation they try to match their speed to the closest vehicles, but if they spread too far ahead or behind they adjust their speed accordingly. When the target vehicle manoeuvres, or is manoeuvred by a controller, the other vehicles have to respond so as not to run into each other. This is complicated by the fact that if the target vehicle turns right, the vehicles on the right side have to move to avoid an impact while those on the left have to move to keep the formation compact. Again it is possible to modify the behaviour of the vehicles both individually and as a swarm.
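The geometry of this rule set can be sketched by giving each escort a body-frame slot around the target: when the target turns, the slot rotates with it, so vehicles on the inside of the turn give way while those on the outside close up. The offsets below are illustrative assumptions.

```python
import math

def escort_goal(target_pos, target_heading, slot_offset):
    """Convoy rule sketch: each escort holds a body-frame slot around the
    target, e.g. (0, 60) is 60 m off the target's left side. Rotating the
    slot with the target's heading reproduces the turn behaviour described
    in the text. Offsets are assumed demonstration values."""
    fwd, left = slot_offset
    cos_h, sin_h = math.cos(target_heading), math.sin(target_heading)
    # Rotate the body-frame offset into world coordinates and add it on.
    gx = target_pos[0] + fwd * cos_h - left * sin_h
    gy = target_pos[1] + fwd * sin_h + left * cos_h
    return gx, gy
```

Each escort then steers toward its goal point with a rule like the follower command, matching speed to its nearest neighbours.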

C.     Waypoints

In this scenario each vehicle is sent a set of waypoints to fly to. The speed of each vehicle is set by how far it is behind the vehicle nearest to the waypoint. Once one vehicle has reached a waypoint, all the vehicles turn towards the next waypoint, and this continues until every waypoint has been visited by at least one vehicle. Again the characteristics of the vehicles can be varied and the time taken to complete the mission used for comparison. The problem is more complex than it appears, as the location of the waypoints, the compactness of the formation and the flight characteristics, for example turn rate, all affect the time to complete the mission.
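This waypoint logic can be sketched in a few lines; the capture radius, base speed and gain below are assumed demonstration values, not the simulator's actual parameters.

```python
import math

def waypoint_step(positions, waypoints, wp_index, capture_radius=20.0,
                  base_speed=30.0, gain=0.1):
    """Waypoint scenario sketch: all vehicles head for the current waypoint,
    flying faster the further they trail the swarm member closest to it,
    which keeps the formation compact. When any one vehicle captures the
    waypoint, the whole swarm advances to the next one."""
    wp = waypoints[wp_index]
    dists = [math.hypot(wp[0] - x, wp[1] - y) for x, y in positions]
    closest = min(dists)
    # Speed grows with the gap to the vehicle nearest the waypoint.
    speeds = [base_speed + gain * (d - closest) for d in dists]
    if closest < capture_radius and wp_index + 1 < len(waypoints):
        wp_index += 1          # one vehicle reached it: everyone turns to the next
    return speeds, wp_index
```

Running this step in a loop, with each vehicle also steering toward the current waypoint, reproduces the scenario, and the loop count gives the mission-completion time used for comparison.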

These scenarios are of course very simple and have been designed purely to determine whether the simulator is able to support our research efforts. They do, however, cover a full range of behaviours and appear to show that the swarm simulator has the capability and flexibility we require.


The remodelled swarm simulator has now been commissioned and is starting to be committed to research work. We will be looking mainly at the search element of a search and rescue mission. One of the other areas we are expanding into is using the simulator to model an autonomous swarm approach to the cogeneration of electrical power.

One of the major design features that has met with some controversy is the use of a cluster of computers, each representing a separate agent, rather than a single processor. The advantage of this approach is that it guarantees separation of processing, reducing the risk of contamination between agents. It also means that an individual agent can be developed and tested prior to its insertion into the cluster, allowing more efficient use of the resource.

To date we are very pleased with the re-built cluster; it seems to have retained the flexibility of its predecessor while providing the advantages associated with up-to-date computing power, all at an acceptable cost.

In future studies, we will use this simulator for two research topics. The first is to investigate the use of multiple vehicles in search and rescue scenarios under limited communication and sensing constraints. The second is to develop a Human-Machine Interface (HMI) with which a single human operator could supervise and control a swarm of UAS.


[1]     C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model," in the 14th annual conference on Computer graphics and interactive techniques, SIGGRAPH, 1987, pp. 25-34.

[2]     E. Bonabeau, et al., Swarm Intelligence-From Natural to Artificial Systems, Oxford University Press, 1999.

[3]     E. Lorenz, The Essence Of Chaos, CRC Press, 1995.

[4]     K. Heffner and F. Hassaine, "Towards Intelligent Operator Interfaces in Support of Autonomous UVS Operations," in 16th International Command and Control Research and Technology Symposium, Quebec, Canada, 2011.

[5]     C. Schumacher, et al., "UAV task assignment with timing constraints via mixed-integer linear programming," in AIAA 3rd Unmanned Unlimited Systems Conference, 2004.

[6]     C. Schumacher, et al., "Task allocation for wide area search munitions with variable path length," Proceedings of American Control Conference, 2003, pp. 3472-3477 vol.4.

[7]     C. Schumacher, et al., "Task allocation for wide area search munitions," Proceedings of American Control Conference, 2002, pp. 1917-1922 vol.3.

[8]     T. Shima and C. Schumacher, "Assigning cooperating UAVs to simultaneous tasks on consecutive targets using genetic algorithms," Journal of the Operational Research Society, 2008.

[9]     S. Garnier, et al., "The biological principles of swarm intelligence," Swarm Intelligence, vol. 1, pp. 3-31, 2007.

[10]  P. Gaudiano, et al., "Control of UAV Swarms: What The Bugs Can Teach Us," presented at the 2nd AIAA "Unmanned Unlimited" Systems, Technologies, and Operations, San Diego, California, 2003.

[11]  I. C. Price, "Evolving self-organized behavior for homogeneous and heterogeneous UAV or UCAV swarms," Master of Science, Department of Electrical and Computer Engineering, Air Force Institute of Technology, 2006.

[12]  C. A. Erignac, "An Exhaustive Swarming Search Strategy based on Distributed Pheromone Maps," presented at the AIAA infotech@Aerospace 2007 Conference and Exhibit, Rohnert Park, California, 2007.

[13]  S. Hauert, et al., "Evolved swarming without positioning information: an application in aerial communication relay," Autonomous Robots, vol. 26, p. 11, 2009.

[14]  A. Ilachinski, Artificial War: Multiagent-Based Simulation of Combat, World Scientific Publishing Company, 2004.

[15]  C. M. Macal and M. J. North, "Agent-based modeling and simulation," Proceedings of  Simulation Conference (WSC), 2009, pp. 86-98.

