The Data Acquisition Backbone Core (DABC)

Design overview

Role and functionality of the objects

  1. All processing code runs in dabc::Modules. There are two general types of dabc::Module: the dabc::ModuleSync (formerly dabc::ModuleM) and the dabc::ModuleAsync (formerly dabc::ModuleF).
    1. dabc::ModuleSync: Each module is executed by a dedicated working thread. The thread executes a method MainLoop() with arbitrary code, which may block the thread. In blocking calls of the framework (resource or port wait), command callbacks may optionally be executed implicitly ("non-strictly blocking mode"). In the "strictly blocking mode", the blocking calls do nothing but wait. A timeout may be set for all blocking calls; this can optionally throw an exception when the time is up. On timeout with exception, either MainLoop() is left and the exception is then handled in the framework thread, or MainLoop() itself catches and handles the exception. On state machine commands (e.g. Halt or Suspend), the blocking calls are also left via exception, thus putting the mainloop thread into a stopped state.
    2. dabc::ModuleAsync: Several modules may be run by a shared working thread. The thread processes an event queue and executes the appropriate callback functions of the module that is the receiver of the event. Events are fired for data input or output, command execution, and when a requested resource (e.g. memory buffer) becomes available. The callback functions must never block the working thread. Instead, a callback must be left by return if further processing requires waiting for a requested resource. Thus each callback function must check the available resources explicitly whenever it is entered.
  2. A dabc::Module may register dabc::Commands in the constructor and may define command actions by overwriting a virtual command callback method in a subclass implementation.
  3. A dabc::Module may register dabc::Parameter objects. dabc::Parameters are accessible by name; their values can be monitored and optionally changed by the controls system.
  4. The dabc::Modules are organized and controlled by one dabc::Manager.
    1. The dabc::Manager is an object manager that owns and keeps all registered dabc::Basic objects in a folder structure. Each object has a direct backpointer to the dabc::Manager.
    2. The dabc::Manager has an independent manager thread. This thread may execute manager commands for configuration, and all high priority commands like state machine switching that must be executed immediately even if the mainloop of the modules is blocked.
    3. The dabc::Manager receives and dispatches external module commands to the destination module where they are queued and eventually executed by the module thread (see above).
    4. The dabc::Manager defines interface methods to the control system. This covers registering, sending, and receiving of commands; registering, updating, and unregistering of parameters; error logging and global error handling. These methods must be implemented in a subclass of dabc::Manager that knows the specific controls framework.
    5. The dabc::Manager is a singleton instance per executable. It is persistent independent of the application's state.
  5. Data in memory is referred by dabc::Buffer objects. Allocated memory areas are kept in dabc::MemoryPool objects.
    1. A dabc::Buffer contains a list of references to scattered memory fragments, or it may have an empty list of references. Auxiliary class dabc::Pointer offers methods to transparently treat the scattered fragments from the user point of view (concept of "virtual contiguous buffer"). Moreover, the user may also get direct access to each of the fragments.
    2. The dabc::Buffers are provided by one or several dabc::MemoryPools which preallocate reasonable memory from the operating system. A dabc::MemoryPool may keep several sets, each set for a different configurable memory size. Each dabc::MemoryPool is identified by a unique dabc::PoolHandle which is used e.g. by the dabc::Module to communicate with the dabc::MemoryPool.
    3. A new dabc::Buffer may be requested from a dabc::MemoryPool by size. Depending on the module type and mode, this request may block until an appropriate dabc::Buffer is available, or it may return an error value if it cannot be fulfilled. The delivered dabc::Buffer has at least the requested size, but may be larger. A dabc::Buffer as delivered by the dabc::MemoryPool is contiguous.
    4. Several dabc::Buffers may refer to the same fragment of memory. Therefore, the memory as owned by the dabc::MemoryPool has a reference counter which is incremented for each dabc::Buffer that refers to any of the contained fragments. When a user frees a dabc::Buffer object, the reference counters of the referred memory blocks are decremented. If a reference counter becomes zero, the memory is marked as "free" in the dabc::MemoryPool.
  6. dabc::Buffers are entering and leaving a dabc::Module through dabc::Port objects.
    1. A dabc::Module may have several input ports, output ports, or bidirectional ports. The dabc::Ports are owned by the dabc::Module.
    2. Each dabc::Port uses a dabc::Buffer queue of configurable length.
    3. The dabc::ModuleSync mainloop function may wait for dabc::Buffers to arrive at one or several input dabc::Ports; this call may block. Processed dabc::Buffers can be put into output dabc::Ports; these calls may also block the mainloop (data backpressure mechanism).
    4. The dabc::ModuleAsync has event callbacks that are executed by the framework when a dabc::Buffer arrives at an input dabc::Port (processInput()), or when an output dabc::Port is ready to accept a dabc::Buffer (processOutput()).
  7. Outside the dabc::Modules the dabc::Ports are connected to dabc::Transport objects.
    1. A dabc::Transport may transfer dabc::Buffers between the dabc::Ports of different dabc::Modules (local data transport).
    2. A dabc::Transport may connect the dabc::Port to a data source (file input, network receiver, hardware readout), or to a data sink (file output, network sender).
    3. Two dabc::Transport instances of the same kind on different nodes may connect dabc::Ports of dabc::Modules on different nodes (e.g. verbs transport).
  8. A dabc::Transport belongs to a dabc::Device object of a corresponding type that manages it.
    1. A dabc::Device may have one or several dabc::Transports.
    2. A dabc::Device instance usually corresponds to an IO component (e.g. network card).
    3. There may be more than one dabc::Device instances of the same type in an application scope.
    4. The threads that run the dabc::Transport functionality are created by the dabc::Device. If a dabc::Transport implementation shall be able to block (e.g. on socket receive), it must be the only dabc::Transport on its thread.
    5. A dabc::Device may register dabc::Parameters and define dabc::Commands. This is the same functionality as available for dabc::Modules.
    6. A dabc::Device is persistent independent of the connection state of the dabc::Transport. In contrast, a dabc::Transport is created on connect() / open() time and deleted on disconnect() / close().
    7. The dabc::Device objects are owned by the global dabc::Manager singleton. dabc::Transport objects are owned by the dabc::Device.
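The reference counting described in item 5.4 can be illustrated with a small self-contained sketch. This is plain C++, not DABC code; all names here are invented for the illustration:

```cpp
#include <vector>

// Toy memory pool: each memory block carries a reference counter, as in
// item 5.4 above. Invented names; this is not the DABC implementation.
struct ToyPool {
    std::vector<int> refcount;                    // one counter per preallocated block
    explicit ToyPool(int nblocks) : refcount(nblocks, 0) {}

    // Hand out a free block as a new "buffer" reference.
    int acquire() {
        for (int i = 0; i < (int)refcount.size(); ++i)
            if (refcount[i] == 0) { refcount[i] = 1; return i; }
        return -1;                                // pool exhausted
    }
    // A second buffer referring to the same memory increments the counter.
    void addref(int block) { ++refcount[block]; }
    // Freeing a buffer decrements; at zero the block is "free" in the pool again.
    void release(int block) { --refcount[block]; }
    bool isFree(int block) const { return refcount[block] == 0; }
};
```

In the real framework a module requesting a buffer from an exhausted pool would block (ModuleSync) or receive a pool event later (ModuleAsync); the toy version simply returns -1.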

-- JoernAdamczewski - 05 Jun 2008

User interface

  1. Usual data processing functionality is implemented by subclassing the dabc::ModuleSync base class.
    1. The constructor of dabc::ModuleSync subclass creates all dabc::Ports, may initialize the dabc::PoolHandles, and may define dabc::Commands and dabc::Parameters.
    2. The virtual ExecuteCommand() method of the dabc::ModuleSync subclass may implement the callbacks of the defined commands.
    3. The virtual MainLoop() method of dabc::ModuleSync subclass implements the processing job.
    4. The user code shall not directly access the dabc::MemoryPools to request new dabc::Buffers. Instead, it can use methods of dabc::Module with the dabc::PoolHandle as argument. These methods may block the MainLoop().
    5. The user code can send and receive dabc::Buffers from and to dabc::Ports by methods of dabc::ModuleSync. These methods may block the MainLoop().
  2. For special set ups (e.g. BNet), the framework implements dabc::Module subclasses with generic functionality (e.g. bnet::BuilderModule, bnet::FilterModule). In this case, the user specific parts like data formats are implemented by subclassing these special dabc::Module classes.
    1. Instead of implementing MainLoop(), other virtual methods (e.g. DoBuildEvent(), TestBuffer()) may be implemented that are implicitly called by the superclass MainLoop().
    2. The special base classes may provide additional methods to be used for data processing.
  3. All data transport functionality is implemented by subclassing dabc::Device and dabc::Transport base classes.
    1. The dabc::Device subclass constructor may initialize dabc::PoolHandles and may define dabc::Commands and dabc::Parameters.
    2. The virtual ExecuteCommand() method of dabc::Device subclass may implement the callbacks of the defined dabc::Commands.
    3. Factory method CreateTransport() of dabc::Device subclass defines which transport instance is created by the framework whenever this device shall be connected to the dabc::Port of a dabc::Module.
    4. The virtual Send() and Recv() methods of the dabc::Transport subclass must implement the actual transport from and to a connected dabc::Port, resp.
    5. To implement a simple user-defined transport (e.g. reading from a special socket protocol), there is another base class dabc::DataTransport (subclass of dabc::Transport). This class already provides queues for (optional) input and output buffers and a data backpressure mechanism. Instead of Send() and Recv(), the user subclass must implement special virtual methods, e.g. ReadBegin() and ReadComplete(dabc::Buffer*), to request the next dabc::Buffer of a given size from the base class and to perform filling this buffer from the associated user dabc::Device, respectively. These methods are called implicitly by the framework at the right time.
  4. The set up of the dabc::Module and dabc::Device objects is done by dabc::Factory plug-ins.
    1. All dabc::Factories are registered and kept in the global dabc::Manager. The access to the factories' functionality is done via methods of the dabc::Manager.
    2. All dabc::Factories are instantiated as static singletons when loading the libraries that define them. The user need not create them explicitly.
    3. The dabc::Factories implement the methods CreateModule(char*) and CreateDevice(char*) to produce a special dabc::Module or dabc::Device subclass by name string, respectively. The user must subclass dabc::Factory to add own classes to the system. Note: the factory method for dabc::Transport objects is not in this factory, but in the corresponding dabc::Device class.
    4. The dabc::Manager can keep more than one dabc::Factory and scans all factories to produce the requested object class.
    5. The framework provides several dabc::Factories for predefined implementations (e.g. bnet::SenderModule, dabc::VerbsDevice).
  5. The specific application controlling code is defined in the dabc::ApplicationPlugin.
    1. The dabc::Manager has exactly one dabc::ApplicationPlugin. The user must define a dabc::ApplicationPlugin subclass.
    2. The dabc::ApplicationPlugin is instantiated and registered at startup time by a global InitPlugins() function. This function is declared and executed in the framework, but defined in the user library, and thus knows the user-specific plug-in classes. This trick is necessary to decouple the executable main function (e.g. XDAQ executive) from the user-specific part.
    3. The user may implement the virtual methods UserConfigure(), UserEnable(), UserBeforeStart(), UserAfterStop(), UserHalt() in his/her dabc::ApplicationPlugin subclass. These methods are executed by the framework state machine before or after the corresponding state transitions to do additional application-specific configuration, run control, and clean-up actions. Note: all generic state machine actions (e.g. cleanup of dabc::Modules and dabc::Devices, starting and stopping the dabc::WorkingProcessors) are already handled by the framework at the right time and need not be invoked explicitly here.
    4. The dabc::ApplicationPlugin may register dabc::Parameters that define the application's configuration. These dabc::Parameters can be set at runtime from the configuration and controls system.
    5. The methods of the dabc::ApplicationPlugin should use the dabc::Factories to create dabc::Modules and dabc::Devices by name string. However, for convenience the user may implement a factory method CreateModule() already in his/her dabc::ApplicationPlugin subclass.
    6. For special DAQ topologies (e.g. Bnet), the framework offers implementations of the dabc::ApplicationPlugin containing the generic functionality (e.g. bnet::WorkerPlugin, bnet::ClusterPlugin). In this case, the user specific parts are implemented by subclassing these and implementing additional virtual methods (e.g. CreateReadout()).
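The factory mechanism of item 4 (creation of modules by name string, with the manager scanning all registered factories) can be sketched as follows. This is a plain C++ illustration with invented names, not the actual dabc::Factory interface:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Invented stand-ins for dabc::Module subclasses.
struct Module { virtual ~Module() = default; virtual std::string name() const = 0; };
struct SenderModule  : Module { std::string name() const override { return "SenderModule"; } };
struct BuilderModule : Module { std::string name() const override { return "BuilderModule"; } };

// A factory maps class-name strings to creation functions (cf. CreateModule(char*)).
struct Factory {
    std::map<std::string, std::function<std::unique_ptr<Module>()>> creators;
    std::unique_ptr<Module> createModule(const std::string& classname) const {
        auto it = creators.find(classname);
        return it == creators.end() ? nullptr : it->second();
    }
};

// The user subclassing step reduces here to registering creation functions by name.
inline Factory makeUserFactory() {
    Factory f;
    f.creators["SenderModule"]  = []() -> std::unique_ptr<Module> { return std::make_unique<SenderModule>(); };
    f.creators["BuilderModule"] = []() -> std::unique_ptr<Module> { return std::make_unique<BuilderModule>(); };
    return f;
}
```

A manager holding several such factories would simply try each one until createModule() returns a non-null object.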

-- JoernAdamczewski - 05 Jun 2008

Controls and configuration

  1. The running state of the DAQ system is ruled by a finite state machine on each node of the cluster
    1. The dabc::Manager provides an interface to switch the application state by the external control system. This may be done by calling state change methods of the dabc::Manager, or by submitting state change commands to the dabc::Manager.
    2. The finite state machine itself is not part of the dabc::Manager, i.e. the dabc::Manager does not necessarily check if a state transition is allowed. Instead, the controls framework provides a state machine implementation.
    3. Some of the application states may be propagated to the active components (dabc::Modules, dabc::Device objects), e.g. the Running or Ready states, which correspond to the activity of the thread. Other states like Halted or Failure do not match a component state; e.g. in the Halted state, all dabc::Modules are deleted and thus have no internal state. The granularity of the control system state machine is not finer than the node application.
    4. There are 5 generic states to treat all set-ups:
      Halted: The application is not configured and not running. No dabc::Modules, dabc::Transports, or dabc::Devices exist.
      Configured: The application is mostly configured, but not running. dabc::Modules and dabc::Devices are created. Local dabc::Port connections are done. Remote dabc::Transport connections may not all be fully connected, since some connections require active negotiations between existing dabc::Modules. Thus, the final connecting is done between Configured and Ready.
      Ready: The application is fully configured, but not running (threads are stopped).
      Running: The application is fully configured and running.
      Failure: This state is reached when there is an error in a state transition function. Note that a run error during the Running state does not lead to Failure, but rather stops the run in the usual way (to Ready).
    5. The state transitions between the 5 generic states correspond to commands of the control system for each node application:
      Configure: between Halted and Configured. The core framework automatically creates standard dabc::Modules and dabc::Devices. The dabc::ApplicationPlugin creates application-specific objects, requests dabc::MemoryPools, and establishes all local dabc::Port connections. The core framework instantiates the requested dabc::MemoryPools.
      Enable: between Configured and Ready. The dabc::ApplicationPlugin may establish the necessary connections between remote dabc::Ports. The framework checks whether all required connections are ready.
      Start: between Ready and Running. The framework automatically starts all dabc::Modules, dabc::Transport and dabc::Device actions.
      Stop: between Running and Ready. The framework automatically stops all dabc::Modules, dabc::Transport and dabc::Device actions, i.e. the code is suspended to wait at the next appropriate waiting point (e.g. begin of MainLoop(), wait for a requested resource). Note: queued dabc::Buffers are not flushed or discarded on Stop!
      Halt: switches states Ready, Running, Configured, or Failure to Halted. The framework automatically deletes all registered objects (dabc::Transport, dabc::Device, dabc::Module) in the correct order. However, the user may explicitly specify at creation time that an object shall be persistent (e.g. a dabc::Device may be kept until the end of the process once it has been created).
  2. The control system may send (user defined) dabc::Commands to each component (dabc::Module , dabc::Device). Execution of these commands is independent of the state machine transitions.
  3. The Configuration is done using dabc::Parameter objects. The dabc::Manager provides an interface to register dabc::Parameters to the configuration/control system.
    1. On application startup time, the external configuration system may set the dabc::Parameters from a database, or from a configuration file (e.g. XDAQ xml configuration).
    2. During the application lifetime, the control system may change values of the dabc::Parameters by command. However, since the set-up is changed at Configure time only, it may be forbidden to change true configuration parameters except when the application is Halted. Otherwise, there would be the possibility of a mismatch between the monitored parameter values and the actually running set-up.
    3. The current dabc::Parameters may be stored back to the data base of the configuration system.
  4. The control system may use local dabc::Parameter objects for monitoring the components. When monitored dabc::Parameters change, the control system is updated via interface methods of the dabc::Manager and may refresh the GUI representation.
  5. The control system may change local dabc::Parameter objects by command in any state to modify minor system properties independent of the configuration set up (e.g. switching on debug output, change details of processing parameters).
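The five generic states and their transition commands can be summarized as a transition table. The sketch below is a plain C++ illustration with invented names (the real state machine lives in the controls framework, not in the DABC core, as noted in item 1.2); treating a disallowed command as a transition to Failure is a simplification for the example:

```cpp
#include <map>
#include <string>
#include <utility>

// The five generic node-application states from the list above.
enum class State { Halted, Configured, Ready, Running, Failure };

// Allowed (state, command) -> next-state transitions; Halt is accepted from
// Configured, Ready, Running, and Failure. Any other combination is treated
// as an error here (simplification: the example maps it to Failure).
inline State apply(State s, const std::string& cmd) {
    static const std::map<std::pair<State, std::string>, State> table = {
        {{State::Halted,     "Configure"}, State::Configured},
        {{State::Configured, "Enable"},    State::Ready},
        {{State::Ready,      "Start"},     State::Running},
        {{State::Running,    "Stop"},      State::Ready},
        {{State::Configured, "Halt"},      State::Halted},
        {{State::Ready,      "Halt"},      State::Halted},
        {{State::Running,    "Halt"},      State::Halted},
        {{State::Failure,    "Halt"},      State::Halted},
    };
    auto it = table.find({s, cmd});
    return it == table.end() ? State::Failure : it->second;
}
```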

-- JoernAdamczewski - 05 Jun 2008

Package and library organisation

The complete system consists of different packages. Each package may be represented by a subproject of the source code with its own namespace. There may be one or more shared libraries for each package. The main packages are as follows:
  1. The Core system: Defines all base classes and interfaces, like dabc::Module, dabc::Port, dabc::Buffer, dabc::Device, dabc::Transport, dabc::Command, dabc::Parameter, dabc::Manager. Basic functionality for object organization, memory management, thread control, event communication.
  2. The Control system: Depends on the Core system. Defines functionality of state machine, command transport, parameter monitoring and modification. Interface to the Core system is implemented by subclass of dabc::Manager.
  3. The Configuration System: Depends on the Core system; it may use functionalities of the Control System. Implements the connection of configuration parameters with a database (i.e. a file in trivial case). Interface to the Core system is defined by subclass of dabc::Manager.
  4. One or more special Transport packages: Depend on the Core system and may depend on an external framework. Implement dabc::Device and dabc::Transport classes for a specific data transfer mechanism, e.g. verbs or tcp/ip socket. Each Transport package provides a dabc::Factory to create a specific dabc::Device by class name. However, the most common transport implementations may be put directly into the Core system, e.g. local memory transport or Socket; the corresponding dabc::Factory is then part of the Core system.
  5. The Bnet package: Depends on the Core system. Implements special dabc::Modules to cover a generic event builder network. Defines interfaces (virtual methods) of the special bnet::Modules to implement user specific code in subclasses. The Bnet package provides a dabc::Factory to create specific bnet::Modules by class name. It also provides dabc::ApplicationPlugins to define generic functionalities for worker nodes (bnet::WorkerPlugin) and controller nodes (bnet::ControllerPlugin).
  6. The Application packages: Depend on the Core system; may depend on several Transport packages; may depend on the Bnet package; may depend on other ApplicationPackages. Implement experiment specific dabc::Modules that actually run the DAQ. May implement dabc::Device and dabc::Transport classes for special data input or output. May provide a dabc::Factory to create a specific dabc::Module or dabc::Device by class name. Provides implementation of dabc::ApplicationPlugin that defines the application set-up; this may be a subclass of specific existing dabc::ApplicationPlugins (e.g. subclass of bnet::WorkerPlugin).

The DABC distribution will provide the Core, Control, and Configuration system packages; it will provide the Bnet package and Transport packages for verbs and socket.

As examples, there may be several Application packages, with configuration data for different cases:
  • simple Mbs event builder (one node, one or more Mbs input channels)
  • Mbs n×m event building (using the Bnet package classes)
  • simple Active Buffer Board readout (this example also contains a separate Transport package implementing the ABB Device)
  • A Bnet set up for Active Buffer Board readout (with ABB Transport package)

-- JoernAdamczewski - 05 Jun 2008

Implementation status

Packages

  1. Core system: Plain C++ code; independent of any external framework.
  2. Control system: (see more details here)
    1. XDAQ provides the application environment and the state machine. There may be a second non-XDAQ state machine environment in the future.
    2. DIM is used as main transport layer for commands and parameter monitoring. On top of DIM, a generic record format for parameters is defined.
    3. Each registered command exports a self describing command descriptor parameter as DIM service.
    4. A generic controls GUI using the record and command descriptors is implemented with JAVA.
    5. Since all Parameters are also exported to XDAQ infospace, the XDAQ web interface and SOAP messaging may be used as an alternative controls path for monitoring and setting parameters, or for switching the state machine.
  3. Configuration system:
    1. The XDAQ infospace mechanism is used to set the configuration parameters from an XDAQ xml file at startup time. ToDo: store the active configuration back to xml.
    2. By means of the XDAQ-DIM interface, configuration parameters are also available as DIM services to the controls system.
    3. ToDo: connection to a real configuration database
  4. Transport packages:
    1. Network transport for tcp/ip socket and InfiniBand verbs
    2. Active Buffer Board (ABB) readout
    3. ROC readout using Knut library
  5. Bnet package: ready
  6. Application packages:
    1. Simple Mbs event building
    2. Bnet with switched Mbs event building
    3. Bnet with pseudo event generators and optional ABB readout

-- JoernAdamczewski - 05 Jun 2008

Classes

Core system
The most important classes of the dabc core system. See the UML class diagram of the current core.

$ dabc::Basic : The base class for all objects to be kept in dabc collection classes (e.g. dabc::Folder).
$ dabc::Command : Represents a command object. A dabc::Command is identified by its name, which it keeps as a text string. Additionally, a dabc::Command object may contain any number of parameters (integer, double, text). These can be set and requested at the dabc::Command by their names. The available parameters of a special command may be exported to the control system as a dabc::CommandDefinition. A dabc::Command is sent from a dabc::CommandClient to a dabc::CommandReceiver that executes it in its scope. The result of the command execution may be returned as a reply event to the dabc::CommandClient. The dabc::Manager is the usual dabc::CommandClient that distributes the dabc::Commands to the dabc::CommandReceivers (i.e. dabc::Module, dabc::Manager, or dabc::Device).
$ dabc::Parameter : Parameter object that may be monitored or changed from the control system. Any dabc::WorkingProcessor implementation may register its own dabc::Parameters. The use case of a dabc::Parameter depends on the lifetime of its parent object: configuration parameters should be created from the dabc::ApplicationPlugin, which is persistent during the process lifetime. Transient monitoring parameters may be created from a dabc::Device (optionally also persistent, but not yet existing at process startup when the configuration database is read!), from a dabc::Module, or from a dabc::Transport implementation (if this also inherits from dabc::WorkingProcessor). The latter dabc::Parameters only exist if the application is at least in Configured state, and they will be destroyed with their parents when switching to Halted state. Currently supported parameter types are:
    • dabc::IntParameter - simple integer value
    • dabc::DoubleParameter - simple double value
    • dabc::StrParameter - simple text string
    • dabc::StateParameter - contains state record, e.g. current state of the finite state machine and associated colour for gui representation
    • dabc::InfoParameter - contains info record, e.g. system message and associated properties for gui representation
    • dabc::RateParameter - contains data rate record and associated properties for gui representation. May be updated in predefined time intervals.
    • dabc::HistogramParameter - contains histogram record and associated properties for gui representation
$ dabc::WorkingThread : An object of this class represents a system thread. The dabc::WorkingThread may execute one or several jobs; each job is defined by an instance of dabc::WorkingProcessor. The dabc::WorkingThread waits on an event queue (by means of a pthread condition) until an event for any associated dabc::WorkingProcessor is received; then the corresponding event action is executed by calling ProcessEvent() of the corresponding dabc::WorkingProcessor.
$ dabc::WorkingProcessor : Represents a runnable job. Each dabc::WorkingProcessor is assigned to one dabc::WorkingThread instance; this thread may run either one dabc::WorkingProcessor, or serve several dabc::WorkingProcessors in parallel. Subclass of dabc::CommandReceiver, i.e. a dabc::WorkingProcessor may receive and execute dabc::Commands in its scope.
$ dabc::Module : A processing unit for one "step" of the dataflow. Subclass of dabc::WorkingProcessor, i.e. the module may be run by its own dedicated thread, or a dabc::WorkingThread may execute several modules that are assigned to it. A dabc::Module has dabc::Ports as connectors for the incoming and outgoing data flow.
$ dabc::ModuleSync : Subclass of dabc::Module; defines the interface for a synchronous module that is allowed to block. The user must implement the virtual method MainLoop(), which runs in a dedicated dabc::WorkingThread. Method TakeBuffer() provides blocking access to a dabc::MemoryPool. The optionally blocking methods Send(dabc::Port*, dabc::Buffer*) and Receive(dabc::Port*, dabc::Buffer*) are used from the MainLoop() code to send or receive dabc::Buffers over or from a dabc::Port of the dabc::ModuleSync.
$ dabc::ModuleAsync : Subclass of dabc::Module; defines the interface for an asynchronous module that must never block the execution. Several dabc::ModuleAsyncs may be assigned to one dabc::WorkingThread. The user must either re-implement the virtual method ProcessUserEvent(), which is called whenever any event for this module (i.e. this dabc::WorkingProcessor) is processed by the dabc::WorkingThread; or the user may implement callbacks for special events (e.g. ProcessInputEvent(), ProcessOutputEvent(), ProcessPoolEvent(), ...) that are invoked when the corresponding event is processed by the dabc::WorkingThread. The events are then dispatched to these callbacks by the default ProcessUserEvent() implementation. To avoid blocking the shared dabc::WorkingThread, the user must always check whether a resource (e.g. a dabc::Port, a dabc::MemoryPool) would block before any calls (e.g. Send(), TakeBuffer()) are invoked on it. All callbacks must be left by return in the "I would block" case; the next time the callback is executed by the framework, the user must check the situation again and react appropriately (this might require own bookkeeping of available resources).
$ dabc::Port : A connection interface between dabc::Module and dabc::Transport. From inside the module scope, only the ports are visible to send or receive dabc::Buffers by reference. Data connections between dabc::Modules (i.e. dabc::Transports between the dabc::Ports of the modules) are set up by the dabc::ApplicationPlugin via methods of the dabc::Manager, specifying the full module/port names. For ports on different nodes, dabc::Commands to establish a connection may be sent remotely (via the controls layer, e.g. DIM) and handled by the dabc::Manager of each node.
$ dabc::Transport : Moves dabc::Buffers by reference between two dabc::Ports, or between a dabc::Port and the dabc::Device which owns the dabc::Transport, respectively. An implementation may be a subclass of dabc::WorkingProcessor.
$ dabc::Device : Subclass of dabc::WorkingProcessor. Generally represents a module-external data producing or consuming entity, e.g. a network connection. Each dabc::Device may create and manage several dabc::Transport objects.
The dabc::Transport and dabc::Device base classes have various implementations:
    • dabc::LocalTransport and dabc::LocalDevice for memory transport within one process
    • dabc::SocketTransport and dabc::SocketDevice for tcp/ip sockets
    • dabc::VerbsTransport and dabc::VerbsDevice for InfiniBand verbs connection
    • dabc::PCITransport and dabc::PCIBoardDevice for DMA I/O from PCI or PCIe boards
$ dabc::Manager : Subclass of dabc::WorkingProcessor (dabc::CommandReceiver) and dabc::CommandClient. The dabc::Manager is one single instance per process; it combines different roles:
    1. It is a manager of all dabc::Basic objects in the process scope. Objects (e.g. dabc::Modules, dabc::Devices, dabc::Parameters) are kept in a folder structure and identified by full path name.
    2. It defines the interface to the controls system (state machine, remote command communication, parameter export); this is to be implemented in a subclass. The dabc::Manager handles the command and parameter flow locally and remotely: commands submitted to the local dabc::Manager are directed to the dabc::CommandReceiver where they shall be executed. If any dabc::Parameter is changed, this is recognized by the dabc::Manager and optionally forwarded to the associated controls system. Current implementations of dabc::Manager are:
      • dabc::StandaloneManager provides simple socket controls connection to send remote commands within a multi node cluster. This is used for the standalone B-net examples without higher level control system.
      • dabc::xd::Manager for use in the XDAQ framework. Provides DIM as transport layer for inter-module commands. Additionally, dabc::Parameters may be registered and updated automatically as DIM services. This manager is used in the XDAQ bnet example from the dabc::xd::Node. More details of this implementation are described here.
    3. It provides interfaces for user specific plug-ins that define the actual set-up: several dabc::Factories to create objects, and one dabc::ApplicationPlugin to define the state machine transition actions.
$ dabc::Factory : Factory plug-in for the creation of dabc::Modules and dabc::Devices.
$ dabc::ApplicationPlugin : Subclass of dabc::WorkingProcessor (dabc::CommandReceiver). Interface for the user specific code. Defines the actions on transitions of the finite state machine of the dabc::Manager. May export dabc::Parameters for configuration, and may define additional dabc::Commands.
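The dabc::WorkingThread / dabc::WorkingProcessor relation described above (one event queue serving several processors) can be mimicked by a toy dispatcher. All names are invented; the real thread waits on a pthread condition when idle instead of draining a queue synchronously:

```cpp
#include <deque>
#include <utility>
#include <vector>

// Toy version of the WorkingThread/WorkingProcessor relation: one event
// queue serves several processors; each queued event names its receiver,
// and the loop calls that processor's ProcessEvent(). Invented names.
struct ToyProcessor {
    int eventsSeen = 0;
    void ProcessEvent(int evcode) { (void)evcode; ++eventsSeen; }
};

struct ToyThread {
    std::vector<ToyProcessor*> processors;    // the jobs served by this thread
    std::deque<std::pair<int, int>> queue;    // (processor index, event code)

    void fireEvent(int proc, int code) { queue.push_back({proc, code}); }

    void runUntilEmpty() {                    // real thread blocks on a condition
        while (!queue.empty()) {
            std::pair<int, int> ev = queue.front();
            queue.pop_front();
            processors[ev.first]->ProcessEvent(ev.second);
        }
    }
};
```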

-- JoernAdamczewski - 05 Jun 2008

Bnet classes
The classes of the Bnet package, providing functionalities of the event builder network.

See UML class Diagram of the bnet:: classes with examples.
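One detail of the classes below that benefits from a concrete sketch is the sender-side dispatch: when no global traffic schedule from the cluster controller is in effect, the bnet::SenderModule can distribute frames over the receiver nodes round-robin. A minimal self-contained illustration (class and method names are ours, not the bnet API):

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch, not bnet code: event frames are assigned to
// receiver nodes in cyclic order, as the sender can do in the
// non-synchronized "round-robin" mode described below.
class RoundRobinDispatcher {
public:
    explicit RoundRobinDispatcher(std::size_t numReceivers)
        : fSent(numReceivers), fNext(0) {}

    // Assign the frame to the next receiver node; returns its index.
    std::size_t Dispatch(int frameId) {
        std::size_t target = fNext;
        fSent[target].push_back(frameId);
        fNext = (fNext + 1) % fSent.size();
        return target;
    }

    // Frames that were dispatched to a given receiver (for inspection).
    const std::vector<int>& SentTo(std::size_t receiver) const {
        return fSent[receiver];
    }

private:
    std::vector<std::vector<int> > fSent;  // fSent[r] = frames sent to node r
    std::size_t fNext;                     // next receiver index in the cycle
};
```

In the time-synchronized mode, the cyclic counter would be replaced by lookups into the traffic schedule distributed by the cluster controller.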

$ bnet::ClusterPlugin : Subclass of dabc::ApplicationPlugin to run on the cluster controller node of the builder network.
    1. It implements the master state machine of the bnet. The controlling GUI usually sends state machine commands to the controller node only; the bnet::ClusterPlugin works as a command fan-out and state observer for all worker nodes.
    2. It controls the traffic scheduling of the data packets between the worker nodes by means of a data flow controller (class bnet::GlobalDFCModule). This controller module communicates with the bnet::SenderModules on each worker to let them send their packets synchronized with all other workers.
    3. It may handle failures on the worker nodes automatically, e.g. by reconfiguring the data scheduling paths between the workers.

$ bnet::WorkerPlugin : Subclass of dabc::ApplicationPlugin to run on the worker nodes of the builder network.
    4. It implements the local state machine of each worker with respect to the bnet functionality.
    5. It registers dabc::Parameters to configure the node in the bnet, and methods to set and check these parameters.
    6. It defines factory methods CreateReadout(), CreateCombiner(), CreateBuilder(), CreateFilter(), and CreateStorage() to be implemented in a user-specific subclass. These methods are used in the worker state machine of the bnet framework.

$ bnet::GeneratorModule : Subclass of dabc::ModuleSync. Framework class to fill a dabc::Buffer from the assigned dabc::MemoryPool with generated (i.e. simulated) data.
    7. Method GeneratePacket(dabc::Buffer*) is to be implemented in a user-defined subclass (e.g. bnet::MbsGeneratorModule) and is called repeatedly in the module's MainLoop().
    8. Each filled buffer is forwarded to the single output dabc::Port of the module.

$ bnet::FormaterModule : Subclass of dabc::ModuleSync. Framework class to format inputs from several readouts into one data frame (e.g. to combine an event from the subevent readouts on that node).
    9. It provides dabc::MemoryPools and one input dabc::Port for each readout connection (e.g. a bnet::GeneratorModule, or a connection to a readout dabc::Transport).
    10. The formatting functionality is to be implemented in method MainLoop() of a user-defined subclass (e.g. bnet::MbsCombinerModule).

$ bnet::SenderModule : Subclass of dabc::ModuleAsync. Responsible for sending the event data frames to the receiver nodes according to the network traffic schedule set by the bnet::ClusterPlugin.
    11. It has one input dabc::Port that receives the event packets (or time-sorted frames) from the preceding bnet::FormatterModule (or a special event combiner module). The input data frames are buffered in the bnet::SenderModule and analyzed to decide which frame is to be sent to which receiver node at what time. This can be done in a non-synchronized "round-robin" fashion, or time-synchronized following a global traffic schedule evaluated by the bnet::ClusterPlugin.
    12. Each receiver node is represented by one output dabc::Port of the bnet::SenderModule that is connected via a network transport (TCP socket, InfiniBand verbs) to an input dabc::Port of the corresponding bnet::ReceiverModule.

$ bnet::ReceiverModule : Subclass of dabc::ModuleAsync. Receives the data frames from the bnet::SenderModules and combines corresponding event packets (or time frames) of the different senders.
    13. It has one input dabc::Port for each sender node in the bnet. The data frames are buffered in the bnet::ReceiverModule until the corresponding frames of all senders have been received; then the combined total frame is sent to the output dabc::Port.
    14. It has exactly one output dabc::Port. This is connected to the bnet::BuilderModule implementation (or a user-defined builder module) that performs the actual event tagging and "first level filtering" according to the experimental physics.

$ bnet::BuilderModule : Subclass of dabc::ModuleSync. Framework class to select and build a physics event from the data frames of all bnet senders as received by the bnet::ReceiverModule.
    15. It has one input dabc::Port connected to the bnet::ReceiverModule. The data frame dabc::Buffers of all bnet senders are transferred serially over this port and are then kept in an internal std::vector in the bnet::BuilderModule.
    16. Method DoBuildEvent() is to be implemented in a user-defined subclass (e.g. bnet::MbsBuilderModule) and is called in the module's MainLoop() when a set of corresponding buffers is complete.
    17. It provides one output dabc::Port that may connect to a bnet::FilterModule, or to a user-defined output or storage module.
    18. Currently the user has to implement the sending of the tagged events to the output dabc::Port explicitly in the subclass. Should this be done automatically in MainLoop() later?

$ bnet::FilterModule : Subclass of dabc::ModuleSync. Framework class to filter the incoming physics events according to the experiment's "trigger conditions".
    19. Has one input dabc::Port to get dabc::Buffers with already tagged physics events from the preceding bnet::BuilderModule.
    20. Has one output dabc::Port to connect a user-defined output or storage module.
    21. Method TestBuffer(dabc::Buffer*) is to be implemented in a user-defined subclass (e.g. bnet::MbsFilterModule) and is called in the module's MainLoop() for each incoming dabc::Buffer. The method should return true if the event is "good" for further processing.
    22. Forwards the tested "good" dabc::Buffers to the output dabc::Port and discards all other buffers.
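The TestBuffer()/forwarding pattern of bnet::FilterModule can be sketched in a few lines. The struct, the trigger tag, and the free functions below are invented for illustration; a real filter subclass would implement TestBuffer(dabc::Buffer*) and let the framework's MainLoop() drive the port traffic.

```cpp
#include <cstddef>
#include <vector>

// Invented stand-in for dabc::Buffer: just a trigger tag.
struct Buffer { int triggerType; };

// User-specific condition, as a bnet::FilterModule subclass would code it
// in TestBuffer(); the tag value 1 is a made-up "good physics" trigger.
bool TestBuffer(const Buffer& buf) {
    return buf.triggerType == 1;
}

// MainLoop-style processing: forward buffers that pass the test to the
// output, discard all others.
std::vector<Buffer> FilterBuffers(const std::vector<Buffer>& input) {
    std::vector<Buffer> output;
    for (std::size_t i = 0; i < input.size(); ++i)
        if (TestBuffer(input[i]))
            output.push_back(input[i]);
    return output;
}
```

The point of the split is that the framework owns the buffer transport while the subclass contributes only the predicate, so a new trigger condition never touches the port handling.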

-- JoernAdamczewski - 27 Jun 2008
Topic revision: r18 - 2008-08-13, LinevSergey