Code Overview

WHY A NEW SIMULATION CODE?
There are many open-source particle simulation packages, so the question naturally arises: why another? The original concept was to develop a code that could be used alongside a continuum solver, and we are currently working on coupling with the University of Manchester's continuum solver Oomph-lib. The aim was that the coupled code could be used to approach problems using various multi-scale computational methods. Additionally, at the University of Twente, a novel contact detection method, the hierarchical grid, had been developed. This algorithm is faster than existing methods for poly-dispersed flows while matching their speed for mono-dispersed ones. So the idea of a new simulation code with three core design aims was born:
  1. It should be easy to use with minimal C++ knowledge.
  2. It should be built around the new hierarchical grid contact detection method.
  3. It should be able to generate accurate continuum fields that can be used with, or alongside, continuum solvers.
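The hierarchical grid mentioned above can be illustrated with a toy 2D sketch. The following is not MercuryDPM's implementation — the class and names are purely illustrative — but it captures the core idea: each particle is stored at the finest grid level whose cell size fits its diameter, and each particle searches its own level and all coarser levels, so a fixed 3x3 cell stencil suffices even for wide size distributions.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <set>
#include <unordered_map>
#include <utility>
#include <vector>

// Toy 2D hierarchical-grid contact detection (illustrative only).
struct Particle { double x, y, r; };

class HGrid {
public:
    // 'sizes' lists the cell size of each level, finest first; the coarsest
    // cell must be at least as large as the largest particle diameter.
    HGrid(std::vector<Particle> particles, std::vector<double> sizes)
        : p_(std::move(particles)), cellSize_(std::move(sizes)),
          cells_(cellSize_.size()), levelOf_(p_.size()) {
        for (std::size_t i = 0; i < p_.size(); ++i) {
            std::size_t lv = 0;  // finest level whose cell fits the particle
            while (lv + 1 < cellSize_.size() && cellSize_[lv] < 2.0 * p_[i].r) ++lv;
            assert(cellSize_[lv] >= 2.0 * p_[i].r && "coarsest cell too small");
            levelOf_[i] = lv;
            cells_[lv][key(cellIndex(p_[i].x, lv), cellIndex(p_[i].y, lv))]
                .push_back(static_cast<int>(i));
        }
    }

    // All contacting pairs (i, j) with i < j. A particle searches its own
    // level and all coarser levels; contacts with smaller particles are
    // found when the smaller particle does its own (finer-level) search.
    std::set<std::pair<int, int>> allPairs() const {
        std::set<std::pair<int, int>> pairs;
        for (int i = 0; i < static_cast<int>(p_.size()); ++i)
            for (std::size_t lv = levelOf_[i]; lv < cellSize_.size(); ++lv) {
                int cx = cellIndex(p_[i].x, lv), cy = cellIndex(p_[i].y, lv);
                for (int dx = -1; dx <= 1; ++dx)
                    for (int dy = -1; dy <= 1; ++dy) {
                        auto it = cells_[lv].find(key(cx + dx, cy + dy));
                        if (it == cells_[lv].end()) continue;
                        for (int j : it->second)
                            if (j != i && overlap(p_[i], p_[j]))
                                pairs.insert({std::min(i, j), std::max(i, j)});
                    }
            }
        return pairs;
    }

private:
    static bool overlap(const Particle& a, const Particle& b) {
        double dx = a.x - b.x, dy = a.y - b.y, d = a.r + b.r;
        return dx * dx + dy * dy < d * d;
    }
    static std::int64_t key(int ix, int iy) {
        return (std::int64_t(ix) << 32) ^ std::uint32_t(iy);
    }
    int cellIndex(double x, std::size_t lv) const {
        return static_cast<int>(std::floor(x / cellSize_[lv]));
    }

    std::vector<Particle> p_;
    std::vector<double> cellSize_;
    std::vector<std::unordered_map<std::int64_t, std::vector<int>>> cells_;
    std::vector<std::size_t> levelOf_;
};
```

Because a small particle never scans the (potentially crowded) fine cells of its large neighbours, the cost per particle stays bounded regardless of how poly-dispersed the packing is.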
FEATURES
Since its inception, MercuryDPM has evolved and gained many novel features. The main features include:
  • The hierarchical grid: A neighbourhood search algorithm that efficiently computes interaction forces, even for highly poly-dispersed particle systems.
  • Built-in coarse-graining statistics package: An advanced statistics package extracts continuum fields such as density, velocity, structure and stress tensors, either during the computation or as a post-processing step. [Not included in the current beta]
  • Access to continuum fields in real time: Continuum fields can be evaluated at run-time, allowing a simulation to respond to its current macroscopic state. An illustrative example is a pressure-release wall, i.e., a wall whose motion is determined by the macroscopic pressure created by particle collisions, and which moves such that its pressure (not its position) is controlled.
  • Contact laws for granular materials: Many granular contact force models are implemented, including elastic (linear or Hertzian), plastic, cohesive, temperature/pressure/time-dependent (sintering), and frictional (sliding/rolling/torsion) forces.
  • Simple C++ implementation: MercuryDPM consists of a series of C++ classes that are flexible yet easy to use. This allows the user to build advanced applications with only a few lines of code.
  • Handlers: The code has handlers for particles, walls and boundaries. Thus, each object type has a common interface, even though individual objects can have completely different properties. It also makes it easier for the user to create new objects.
  • Complex walls: Beyond simple flat walls, axially symmetric, polyhedral and helical screw walls are all available by default. Additionally, thanks to the handler interface, it is easy for more advanced users to define new wall types themselves.
  • Specialised classes: Many specialised classes exist that reduce the amount of code required by the user to develop standard geometries and applications. Examples include chute flows, vertically vibrated walls and rotating drums.
  • Species: Particles and walls each have a unique species, which is hidden for basic use of the code; however, this feature can be enabled by a single function call. Different particle properties for each species and different interaction forces for each pair of species can then be defined, allowing the simulation of mixtures.
  • Self-test suite and demos: MercuryDPM comes with a large number (over 100) of self-tests and demo codes. These serve two purposes: 1) they allow us to continually test both new and old features, keeping bugs to a minimum; 2) they serve as good examples for new users of how to perform different tasks.
  • Simple restarting: Every time a code is run, and at intervals during the computation, restart files are generated. Codes can be restarted without recompilation simply by calling the executable again with the restart file name as an argument. The restart files are complete, in the sense that they contain all the information about the problem; therefore, small changes can be made (e.g. to the individual particle density or the coefficient of restitution) and the simulation rerun, again without recompiling the code.
  • Visualisation: The particle output can be visualised easily using the free package VMD (visual molecular dynamics, http://www.ks.uiuc.edu/Research/vmd/) as well as the in-house visualisation tool xballs.
  • Parallel: A distributed-memory parallel version of the code, using MPI, is currently under development and should be publicly available shortly.
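To give a flavour of the simplest contact law listed above, the linear spring-dashpot normal force can be written down in a few lines. This sketch is generic textbook granular mechanics, not MercuryDPM's API; the function names and parameters are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Linear spring-dashpot normal contact force (illustrative, not the
// MercuryDPM API). For an overlap delta > 0 between two particles,
//   f_n = k * delta + gamma * v_n,
// where k is the spring stiffness, gamma the dissipation coefficient and
// v_n the relative normal (approach) velocity.
double normalForce(double delta, double vNormal, double k, double gamma) {
    if (delta <= 0.0) return 0.0;       // no contact, no force
    double f = k * delta + gamma * vNormal;
    return f > 0.0 ? f : 0.0;           // non-cohesive contacts only push
}

// For this model the coefficient of restitution follows analytically:
// e = exp(-beta * pi / omega), with beta = gamma / (2 m_eff) and
// omega = sqrt(k / m_eff - beta^2), the damped contact frequency.
double restitution(double k, double gamma, double effectiveMass) {
    const double pi = std::acos(-1.0);
    double beta = gamma / (2.0 * effectiveMass);
    double omega = std::sqrt(k / effectiveMass - beta * beta);
    return std::exp(-beta * pi / omega);
}
```

The analytic restitution coefficient is one reason the linear model is popular: stiffness and dissipation can be chosen directly from a desired restitution and contact time.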
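The coarse-graining idea behind the statistics package can likewise be illustrated. The following is a generic Gaussian-kernel sketch, not the package itself: a smooth density field is obtained by smearing each particle's mass over a kernel of chosen width w.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative 2D coarse-grained density field (not MercuryDPM's
// statistics package). Each particle of mass m_i at (x_i, y_i) is smeared
// with a normalised Gaussian of coarse-graining width w:
//   rho(x, y) = sum_i m_i * exp(-r_i^2 / (2 w^2)) / (2 pi w^2).
struct PointMass { double x, y, m; };

double density(double x, double y, const std::vector<PointMass>& ps, double w) {
    const double pi = std::acos(-1.0);
    const double norm = 1.0 / (2.0 * pi * w * w);
    double rho = 0.0;
    for (const auto& p : ps) {
        double dx = x - p.x, dy = y - p.y;
        rho += p.m * norm * std::exp(-(dx * dx + dy * dy) / (2.0 * w * w));
    }
    return rho;
}
```

Velocity and stress fields follow the same pattern with momentum and contact-force sums in place of mass; the width w sets the scale on which the discrete particle data is averaged into a continuum.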