
Network Class Reference

A network, containing layers, units, etc.

For more info, see: Wiki Docs For: Network

 #include <Network>

defined at: /mnt/ssd/grey/local/include/Emergent/Network.h:253-1078

Inherits From: taFBase, taNBase, taOBase, taBase

Inherited By: BpNetwork, LeabraNetwork

Index: SubTypes, Members, Methods, Static Members, Static Methods, Expert Members, Expert Methods

SubType Documentation

enum Network::WtSaveFormat

Constant (Value) -- Description
Network::TEXT (0x00000000) -- weights are saved as ascii text representation of digits (completely portable)
Network::BINARY (0x00000001) -- weights are written directly to the file in binary format (no loss in accuracy and more space efficient, but possibly non-portable)
Network::NET_FMT (0x00000002) -- use format specified on the network

enum Network::AutoBuildMode

Constant (Value) -- Description
Network::AUTO_BUILD (0x00000000) -- automatically build the network after loading
Network::PROMPT_BUILD (0x00000001) -- prompt about building after loading (if run in -nogui mode, it is automatically built without prompting)
Network::NO_BUILD (0x00000002) -- do not build network after loading

enum Network::AutoLoadMode

Constant (Value) -- Description
Network::NO_AUTO_LOAD (0x00000000) -- do not automatically load a weights file
Network::AUTO_LOAD_WTS_0 (0x00000001) -- automatically load weights from the first saved weights entry -- that entry should have the save_with_proj or auto_load flag set
Network::AUTO_LOAD_FILE (0x00000002) -- automatically load a weights file named in auto_load_file after loading the project

enum Network::NetTextLoc

Constant (Value) -- Description
Network::NT_BOTTOM (0x00000000) -- standard bottom location below network -- extends network 'foot' lower below to make text more visible
Network::NT_TOP_BACK (0x00000001) -- above top-most layer, at the back of the network depth-wise -- foot is raised as when no net text is visible
Network::NT_LEFT_BACK (0x00000002) -- at left of network, at the back of the network depth-wise -- foot is raised as when no net text is visible
Network::NT_RIGHT_BACK (0x00000003) -- at right of network, at the back of the network depth-wise -- foot is raised as when no net text is visible
Network::NT_LEFT_MID (0x00000004) -- at left of network, in the middle depth-wise -- foot is raised as when no net text is visible
Network::NT_RIGHT_MID (0x00000005) -- at right of network, in the middle depth-wise -- foot is raised as when no net text is visible

enum Network::NetFlags

flags for network

Constant (Value) -- Description
Network::NF_NONE (0x00000000)
Network::NETIN_PER_PRJN (0x00000001) -- compute netinput per projection instead of a single aggregate value across all inputs (which is the default)
Network::BUILD_INIT_WTS (0x00000002) -- initialize the weights after building the network -- for very large networks, may want to turn this off to save some redundant time
Network::INIT_WTS_1_THREAD (0x00000004) -- use only one (main) thread to initialize weights -- this ensures that runs with different numbers of threads have the same initial weights, but is slower
Network::SAVE_KILLED_WTS (0x00000008) -- if the project is killed while running in a non-interactive mode (e.g., on cluster), save this network's weights (only if network is built and epoch > 0)
Network::BUILT (0x00001000) -- is the network built -- all memory allocated, etc
Network::INTACT (0x00002000) -- if the network is built, is it also still intact, with all the current params set as they were when it was built?
Network::BUILT_INTACT (0x00003000) -- built and intact
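
For illustration, a minimal sketch of checking and setting these flags from C++ code, assuming net points to an existing Network (the flag-accessor methods are documented below under _NoCategory):

  net->SetNetFlagState(Network::NETIN_PER_PRJN, true);  // per-projection netinput
  if (net->HasNetFlag(Network::BUILD_INIT_WTS))
    net->ClearNetFlag(Network::BUILD_INIT_WTS);         // skip weight init at build time
  bool per_prjn = net->NetinPerPrjn();                  // convenience query for NETIN_PER_PRJN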

enum Network::WtUpdate

Constant (Value) -- Description
Network::ON_LINE (0x00000000) -- update weights on-line (after every event) -- this is not viable for dmem processing across trials and is automatically switched to small_batch in that case
Network::SMALL_BATCH (0x00000001) -- update weights every small_batch_n trials
Network::BATCH (0x00000002) -- update weights in batch mode (after every epoch)

enum Network::TrainMode

Constant (Value) -- Description
Network::TEST (0x00000000) -- network is only being tested; no learning should occur
Network::TRAIN (0x00000001) -- network is being trained: learning should occur

enum Network::StateLayerSpecTypes

manual type registry system for all spec types used in state code -- any new spec type MUST be added to this list, extending from N case in the last list, for any derived classes, and each spec must return its appropriate enum in GetStateSpecType() method

Constant (Value) -- Description
Network::T_LayerSpec (0x00000000) -- base LayerSpec type
Network::N_NetworkLayerSpecs (0x00000001) -- derived classes start from this one -- use class name for subclasses

enum Network::StatePrjnSpecTypes

manual type registry system for all spec types used in state code -- any new spec type MUST be added to this list, extending from N case in the last list, for any derived classes, and each spec must return its appropriate enum in GetStateSpecType() method

Constant (Value) -- Description
Network::T_ProjectionSpec (0x00000000) -- base PrjnSpec type
Network::T_FullPrjnSpec (0x00000001)
Network::T_OneToOnePrjnSpec (0x00000002)
Network::T_GpOneToOnePrjnSpec (0x00000003)
Network::T_MarkerGpOneToOnePrjnSpec (0x00000004)
Network::T_GpMapConvergePrjnSpec (0x00000005)
Network::T_GpMapDivergePrjnSpec (0x00000006)
Network::T_RandomPrjnSpec (0x00000007)
Network::T_UniformRndPrjnSpec (0x00000008)
Network::T_PolarRndPrjnSpec (0x00000009)
Network::T_SymmetricPrjnSpec (0x0000000A)
Network::T_TesselPrjnSpec (0x0000000B)
Network::T_GpTesselPrjnSpec (0x0000000C)
Network::T_TiledGpRFPrjnSpec (0x0000000D)
Network::T_TiledGpRFOneToOnePrjnSpec (0x0000000E)
Network::T_TiledGpRFOneToOneWtsPrjnSpec (0x0000000F)
Network::T_TiledSubGpRFPrjnSpec (0x00000010)
Network::T_TiledRFPrjnSpec (0x00000011)
Network::T_TiledNovlpPrjnSpec (0x00000012)
Network::T_TiledGpMapConvergePrjnSpec (0x00000013)
Network::T_TiledDivGpRFPrjnSpec (0x00000014)
Network::T_GaussRFPrjnSpec (0x00000015)
Network::T_GradientWtsPrjnSpec (0x00000016)
Network::T_PFCPrjnSpec (0x00000017)
Network::T_BgPfcPrjnSpec (0x00000018)
Network::T_ConPoolPrjnSpec (0x00000019) -- not converted
Network::T_SmallWorldPrjnSpec (0x0000001A) -- not converted
Network::T_ScalarValSelfPrjnSpec (0x0000001B) -- not converted
Network::T_SaliencyPrjnSpec (0x0000001C) -- not converted
Network::T_CerebConj2PrjnSpec (0x0000001D) -- not converted
Network::T_TopoWtsPrjnSpec (0x0000001E) -- not converted
Network::N_NetworkPrjnSpecs (0x0000001F) -- derived classes start from this one -- use class name for subclasses

enum Network::StateUnitSpecTypes

manual type registry system for all spec types used in state code -- any new spec type MUST be added to this list, extending from N case in the last list, for any derived classes, and each spec must return its appropriate enum in GetStateSpecType() method

Constant (Value) -- Description
Network::T_UnitSpec (0x00000000) -- base UnitSpec type
Network::N_NetworkUnitSpecs (0x00000001) -- derived classes start from this one -- use class name for subclasses

enum Network::StateConSpecTypes

manual type registry system for all spec types used in state code -- any new spec type MUST be added to this list, extending from N case in the last list, for any derived classes, and each spec must return its appropriate enum in GetStateSpecType() method

Constant (Value) -- Description
Network::T_ConSpec (0x00000000) -- base ConSpec type
Network::N_NetworkConSpecs (0x00000001) -- derived classes start from this one -- use class name for subclasses


Regular (preferred) Member and Method Documentation


Member Documentation

Member Category: CUDA

Network::cuda : NetworkCudaSpec

parameters for NVIDIA CUDA GPU implementation -- only applicable for CUDA_COMPILE binaries

Member Category: Counter

Network::batch : int

batch counter: number of times network has been trained over a full sequence of epochs (updated by program)

Network::cycle : int

cycle counter: number of iterations of activation updating (settling) on the current external input pattern (updated by program)

Network::epoch : int

epoch counter: number of times a complete set of training patterns has been presented (updated by program)

Network::group : int

group counter: optional extra counter to record sequence-level information (sequence = group of trials)

Network::group_name : taString

name associated with the current group of trials, if such a grouping is applicable (typically set by a LayerWriter)

Network::output_name : taString

name for the output produced by the network (must be computed by a program)

Network::tick : int

tick counter: optional extra counter to record a level of organization below the trial level (for cases where trials have multiple component elements)

Network::time : float

the current time, relative to some established starting point, in algorithm-specific units (often milliseconds) -- updated internally by network

Network::total_trials : int

total number of trials counter: number of external input patterns that have been presented since the weights were initialized -- updated internally by network

Network::trial : int

trial counter: number of external input patterns that have been presented in the current epoch (updated by program)

Network::trial_name : taString

name associated with the current trial (e.g., name of input pattern, typically set by a LayerWriter)

Member Category: File

Network::wt_save_fmt : Network::WtSaveFormat

format to save weights in if saving weights

Member Category: Learning

Network::small_batch_n : int

number of events for small_batch learning mode (specifies how often weight changes are synchronized in dmem)

Network::train_mode : Network::TrainMode

training mode -- determines whether weights are updated or not (and other algorithm-dependent differences as well). TEST turns off learning

Network::wt_update : Network::WtUpdate

weight update mode: when are weights updated (only applicable if train_mode = TRAIN)

Member Category: State

Network::n_threads : int

number of CPU threads to use -- defaults to the value in preferences, but can be overridden -- this value is copied to net_state->threads.n_threads, which is the actual thread implementation

Member Category: Statistic

Network::avg_sse : Average

average sum squared error over an epoch or similar larger set of external input patterns

Network::cnt_err : float

count of number of times the sum squared error was above cnt_err_tol over an epoch or similar larger set of external input patterns

Network::pct_cor : float

proportion of trials, over an epoch or similar larger set of external input patterns, where the sum squared error was below cnt_err_tol (= 1 - pct_err -- provided for convenience so you can plot whichever you prefer)

Network::pct_err : float

proportion of trials, over an epoch or similar larger set of external input patterns, where the sum squared error was above cnt_err_tol (= cnt_err / n)

Network::sse : float

sum squared error over the network, for the current external input pattern

Network::stats : NetStatsSpecs

parameters controlling the computation of statistics

Network::sum_sse : float

total sum squared error over an epoch or similar larger set of external input patterns
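
As an illustrative example of how these statistics relate: if an epoch consists of n = 100 trials and cnt_err = 12 of them exceeded cnt_err_tol, then pct_err = cnt_err / n = 0.12 and pct_cor = 1 - pct_err = 0.88; avg_sse similarly reflects sum_sse averaged over those same trials.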

Member Category: Structure

Network::auto_build : Network::AutoBuildMode

whether to automatically build the network (make units and connections) after loading or not

Network::flags : Network::NetFlags

flags controlling various aspects of network function

Network::n_cons : int64_t

total number of connections in the network

Network::n_units : int

total number of units in the network

Member Category: _NoCategory

Network::auto_load_file : taString

file name to auto-load weights file from (any path must be relative to project file)

Network::auto_load_wts : Network::AutoLoadMode

Whether to automatically load a weights file when the Network object is loaded. It is not possible to save the units, so this can be used to provide a pre-configured network for the user (must auto_build the network first)

Network::brain_atlas : taBrainAtlasRef

The name of the atlas to use for brain view rendering. Labels from this atlas can be applied to layers' brain_area member.

Member Category: taBase

taFBase::desc : taString

description of this object: what does it do, how should it be used, etc

taNBase::name : taString

name of the object


Method Documentation

Method Category: Activation

void Network::Compute_Act ( )

Compute Activation based on net input

Show Source Code

void Network::Compute_Netin ( )

Compute NetInput: weighted activation from other units

Show Source Code

void Network::Compute_NetinAct ( )

compute net input from other units and then our own activation value based on that -- use this for feedforward networks to propagate activation through network in one compute cycle

Show Source Code

void Network::Init_Acts ( )

initialize the unit activation state variables

Show Source Code

void Network::Init_Epoch ( )

Initializes network state at the start of a new epoch -- updates parameters according to param_seq for example

Show Source Code

void Network::Init_InputData ( )

Initializes external and target inputs

Show Source Code

void Network::Init_Sequence ( )

called by NetGroupedDataLoop at the start of a sequence (group) of input data events -- some algorithms may want to have a flag to optionally initialize activations at this point

Show Source Code

bool Network::NetinPerPrjn ( )

is this network configured to compute net input on a per-prjn basis?

Show Source Code

void Network::Send_Netin ( )

sender-based computation of net input: weighted activation from other units

Show Source Code
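
As a rough illustration of how these calls fit together, here is a minimal feedforward trial sketch, assuming net points to a built Network and external inputs are applied by the surrounding program (e.g., via a LayerWriter):

  net->Init_InputData();      // clear previous external and target inputs
  // ... apply the new input pattern to the input layers here ...
  net->Compute_NetinAct();    // one netinput + activation pass through the network
  net->Compute_TrialStats();  // sse and other trial-level statistics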

Method Category: CUDA

void Network::Cuda_ConStateToDevice ( )

send all the connection state variables (weights, dwts, etc) to the GPU device from the host -- this is done automatically after Init_Weights and LoadWeights*

Show Source Code

void Network::Cuda_ConStateToHost ( )

get all the connection state variables (weights, dwts, etc) back from the GPU device to the host -- this is done automatically before SaveWeights*

Show Source Code

taString Network::Cuda_TimingReport ( bool print = true )

report time used statistics for CUDA operations (only does something for cuda compiled version)

Show Source Code

void Network::Cuda_UnitStateToDevice ( )

send all the unit state variables (acts etc) to the GPU device from the host

Show Source Code

void Network::Cuda_UnitStateToHost ( )

get all the unit state variables (acts etc) back from the GPU device to the host

Show Source Code

void Network::Cuda_UpdateSpecs ( )

update all the specs stored in the cuda device, based on current settings -- called automatically after Init_Weights, but needs to be called manually when specs are changed

Show Source Code

Method Category: DMem

void Network::DMemTrialBarrier ( )

block all dmem processors at the trial level until everyone reaches this same point in the program flow -- coordinates all the processors at this point -- important for cases where there are interdependencies among processors, where they need to be coordinated going forward -- does nothing if dmem_nprocs <= 1 or not using dmem

Show Source Code

bool Network::DMem_ConfirmHash ( bool incl_weights = true )

create a unique hash code of the entire state of the network, and confirm that it is the same across all nodes in current DMem group -- triggers an error and returns false if they do not match (null function if not dmem)

Show Source Code

void Network::DMem_ShareTrialData ( DataTable* dt, int n_rows = 1 )

share trial data from given datatable across the trial-level dmem communicator (outer loop) -- each processor gets data from all other processors; if called every trial with n_rows = 1, data will be identical to non-dmem; if called at end of epoch with n_rows = -1 data will be grouped by processor but this is more efficient

Show Source Code

Method Category: Display

void Network::AssignVoxels ( )

assign voxel coordinates to units in the network according to current atlas_name on the Network and brain_area specifications on the Layers

Show Source Code

NetView* Network::FindMakeView ( T3Panel* fr = __null )

find existing or make a new viewer of this network (NULL=use existing empty frame if any, else make new frame)

Show Source Code

NetView* Network::FindView ( )

find (first) existing viewer of this network

Show Source Code

UnitState_cpp* Network::GetViewSrcU ( )

get the currently picked source unit (for viewing weights) from netview

Show Source Code

taString Network::GetViewVar ( )

get the currently viewed variable name from netview

Show Source Code

void Network::HistMovie ( int x_size = 640, int y_size = 480, taString& fname_stub = movie_img_ )

record individual frames of the netview display from current position through to the end of the history buffer, as movie frames -- use mjpeg tools http://mjpeg.sourceforge.net/ (pipe png2yuv into mpeg2enc) to compile the individual PNG frames into an MPEG movie, which can then be transcoded (e.g., using VLC) into any number of other formats

Show Source Code

void Network::NetControlPanel ( ControlPanel* editor, taString& extra_label, taString& sub_gp_nm )

add the key network counters and statistics to a project control panel (if ctrl_panel is NULL, a new one is created in .ctrl_panels). The extra label is prepended to each member name, and if sub_group, then all items are placed in a subgroup with the network's name. NOTE: be sure to click update_after on NetCounterInit and Incr at appropriate program level(s) to trigger updates of select edit display (typically in Train to update epoch -- auto update of all after Step so only needed for continuous update during running)

Show Source Code

BrainView* Network::NewBrainView ( T3Panel* fr = __null )

Create an fMRI-style brain visualization to show activations in defined brain areas.

Show Source Code

NetView* Network::NewView ( T3Panel* fr = __null )

make a new viewer of this network (NULL=use existing empty frame if any, else make new frame)

Show Source Code

void Network::PlaceNetText ( Network::NetTextLoc net_text_loc, float scale = 1.0f )

locate the network text data display (counters, statistics -- typically shown at bottom of network) in a new standard location (it can also be dragged anywhere in the net view, turn on lay_mv button and click on red arrow) -- can also change the scaling

Show Source Code

bool Network::SetViewSrcU ( UnitState_cpp* src_u )

set the picked source unit (for viewing weights) in netview

Show Source Code

bool Network::SetViewVar ( taString& view_var )

set the variable name to view in the netview

Show Source Code

Method Category: File

bool Network::LoadFmFirstWeights ( bool quiet = false )

load weight values from first Weights object -- if it does not yet exist, emit an error message -- useful for basic save and load of one cached set of weights, as compared to a situation where you need to manage multiple different weight sets

Show Source Code

bool Network::LoadWeights ( taString& fname, bool quiet = false )

read weight values in from a simple ordered list of weights (fmt is read from file) (leave fname empty to pull up file chooser)

Show Source Code

void Network::SaveToFirstWeights ( )

write weight values out to the first Weights object in the weights list -- if it does not yet exist, then create it -- useful for basic save and load of one cached set of weights, as compared to a situation where you need to manage multiple different weight sets

Show Source Code

void Network::SaveWeights ( taString& fname, Network::WtSaveFormat fmt = NET_FMT )

write weight values out in a simple ordered list of weights (optionally in binary fmt) (leave fname empty to pull up file chooser)

Show Source Code
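
A minimal save/load sketch using the methods above, assuming net points to a built Network (the file name is illustrative):

  taString wt_file = "net_trained.wts";
  net->SaveWeights(wt_file, Network::BINARY);  // or Network::TEXT for a fully portable file
  // ... later, after the same network has been built again ...
  net->LoadWeights(wt_file);                   // the format is read back from the file

For a single cached weight set stored with the project itself, SaveToFirstWeights() and LoadFmFirstWeights() (documented above) avoid managing external files.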

bool Network::SaveWeights_ClusterRunCmd ( )

check if user has sent a specific command to save weights through jobs_running_cmd.dat file -- called at end of epoch in Compute_EpochStats

Show Source Code

bool Network::SaveWeights_ClusterRunTerm ( )

update cluster run job info and check if it is time to save weights before job terminates -- called in Compute_Weights

Show Source Code

void Network::SaveWeights_Tagged ( )

save weights using standard naming format as generated with the SaveWeights program, based on the tag environment variable

Show Source Code

Method Category: Learning

void Network::AddNoiseToWeights ( Random& noise_spec )

add noise to weights using given noise specification

Show Source Code

void Network::Compute_Weights ( )

update weights for whole net: calls DMem_SumDWts before doing update if in dmem mode

Show Source Code

bool Network::Compute_Weights_Test ( int trial_no )

check to see if it is time to update the weights based on the given number of completed trials (typically trial counter + 1): if ON_LINE, always true; if SMALL_BATCH, only if trial_no % batch_n_eff == 0; if BATCH, never (check at end of epoch and run then)

Show Source Code
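
The update rule described above can be restated in plain C++ as follows (a sketch of the logic only, not the library implementation; small_batch_n_eff stands for the effective small-batch size):

  bool ShouldUpdateWeights(Network::WtUpdate wt_update, int small_batch_n_eff, int trial_no) {
    switch (wt_update) {
      case Network::ON_LINE:     return true;                                 // after every event
      case Network::SMALL_BATCH: return (trial_no % small_batch_n_eff) == 0;  // every n trials
      case Network::BATCH:       return false;                                // update at end of epoch
    }
    return false;
  }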

void Network::Compute_dWt ( )

compute weight changes -- the essence of learning

Show Source Code

void Network::Init_Weights ( )

Initialize the weights -- also inits acts, counters and stats -- does unit level threaded and then does Layers after

Show Source Code

void Network::Init_Weights_AutoLoad ( )

auto-load weights from the Weights object, if it has auto_load set

Show Source Code

void Network::Init_Weights_post ( )

post-initialize state variables (ie. for scaling symmetrical weights, other wt state keyed off of weights, etc) -- this MUST be called after any external modifications to the weights, e.g., the TransformWeights or AddNoiseToWeights calls on any lower-level objects (layers, units, con groups)

Show Source Code

void Network::Init_dWt ( )

Initialize the weight change variables

Show Source Code

void Network::TransformWeights ( SimpleMathSpec& trans )

apply given transformation to weights

Show Source Code

Method Category: ObjectMgmt

bool Network::ChangeMyType ( TypeDef* new_type )

Change me into a different type of object, copying current info (done through owner)

Show Source Code

void Network::Copy_Weights ( Network* src )

copies weights from other network (incl wts assoc with unit bias member)

Show Source Code

taString taNBase::GetName ( )

Get the name of the object

Show Source Code

bool taNBase::HasName ( )

does the object have a name field that can be set?

Show Source Code

void taNBase::MakeNameUnique ( )

make sure my name is unique relative to names of objects associated with my owner (e.g., if it is a list object), typically because my name has changed, and owner needs to ensure that all names are unique

Show Source Code

void Network::RemoveMonitors ( )

Remove monitoring of all objects in all processes associated with parent project

Show Source Code

bool taNBase::SetName ( taString& nm )

Set the object's name

Show Source Code

void Network::UpdateMonitors ( )

Update monitoring of all objects in all processes associated with parent project

Show Source Code

void Network::UpdtAfterNetMod ( )

update network after any network modification (calls appropriate functions)

Show Source Code

void Network::setStale ( )

set the stale flag indicating a change in object values; gets forwarded up ('true' is implied, only the impl obj defines when it is cleared)

Show Source Code

Method Category: State

ConSpec* Network::ConSpecFromState ( ConSpec_cpp* state )

get con spec corresponding to given state

Show Source Code

LayerState_cpp* Network::GetLayerState ( int lay_idx )

get layer state for given layer index

Show Source Code

PrjnState_cpp* Network::GetPrjnState ( int prjn_idx )

get prjn state for given prjn index

Show Source Code

char* Network::GetStateSuffix ( )

get the suffix string for this state type ('_cpp', '_cuda', or blank for main)

Show Source Code

UnGpState_cpp* Network::GetUnGpState ( int ungp_idx )

get unit group state for given index

Show Source Code

UnitState_cpp* Network::GetUnitState ( int flat_idx )

unit state for given flat_idx

Show Source Code

UnitState_cpp* Network::GetUnitStateFromPath ( taString& path )

get unit state from path (owning layer plus index) -- generated by GetUnitStatePath

Show Source Code

taString Network::GetUnitStatePath ( UnitState_cpp* unit )

get a path to unit state (owning layer plus index) -- can be decoded by GetUnitStateFromPath

Show Source Code

Layer* Network::LayerFromState ( LayerState_cpp* state )

get layer corresponding to given layer state

Show Source Code

bool Network::LayerInRange ( int lay_idx, bool err_msg = true )

test if layer number is in range

Show Source Code

Projection* Network::PrjnFromState ( PrjnState_cpp* state )

get projection corresponding to given prjn state

Show Source Code

bool Network::PrjnInRange ( int prjn_idx, bool err_msg = true )

test if prjn number is in range

Show Source Code

ConState_cpp* Network::RecvConState ( int flat_idx, int recv_idx )

recv ConState for given flat unit index and recv group index number

Show Source Code

ConState_cpp* Network::RecvConStateSafe ( int flat_idx, int recv_idx )

recv ConState for given flat unit index and recv group index number

Show Source Code

ConState_cpp* Network::SendConState ( int flat_idx, int send_idx )

send ConState for given flat unit index and send index number

Show Source Code

ConState_cpp* Network::SendConStateSafe ( int flat_idx, int send_idx )

send ConState for given flat unit index and send index number

Show Source Code

ConSpec* Network::StateConSpec ( int idx )

con_spec at given index in list of state objects built in network state for running

Show Source Code

Layer* Network::StateLayer ( int idx )

layer at given index in list of state objects built in network state for running

Show Source Code

LayerSpec* Network::StateLayerSpec ( int idx )

layer_spec at given index in list of state objects built in network state for running

Show Source Code

Projection* Network::StatePrjn ( int idx )

prjn at given index in list of state objects built in network state for running

Show Source Code

ProjectionSpec* Network::StatePrjnSpec ( int idx )

prjn_spec at given index in list of state objects built in network state for running

Show Source Code

UnitSpec* Network::StateUnitSpec ( int idx )

unit_spec at given index in list of state objects built in network state for running

Show Source Code

void Network::SyncAllState ( )

synchronize all state -- net and layer

Show Source Code

void Network::SyncLayerState ( )

synchronize all layer main state with LayerState computational state objects -- each variable is either on one side or the other, and sync copies in proper direction

Show Source Code

void Network::SyncLayerState_Layer ( Layer* lay )

synchronize one layer main state with LayerState computational state object -- each variable is either on one side or the other, and sync copies in proper direction

Show Source Code

void Network::SyncNetState ( )

synchronize our main state with NetworkState computational state objects -- each variable is either on one side or the other, and sync copies in proper direction

Show Source Code

void Network::SyncPrjnState ( )

synchronize all prjn main state with PrjnState computational state objects -- each variable is either on one side or the other, and sync copies in proper direction

Show Source Code

void Network::SyncPrjnState_Prjn ( Projection* lay )

synchronize one projection main state with PrjnState computational state object -- each variable is either on one side or the other, and sync copies in proper direction

Show Source Code

bool Network::ThrInRange ( int thr_no, bool err_msg = true )

test if thread number is in range

Show Source Code

int Network::ThrLayUnEnd ( int thr_no, int lay_no )

ending thread-specific unit index for given layer (from state_layers list) -- this is like the max in a for loop -- valid indexes are < end

Show Source Code

int Network::ThrLayUnStart ( int thr_no, int lay_no )

starting thread-specific unit index for given layer (from state_layers list)

Show Source Code
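
Taken together, these two methods define the canonical per-thread loop over a layer's units; a sketch, assuming net is a built Network*, thr_no a valid thread number, and lay_no an index into the state_layers list:

  int end = net->ThrLayUnEnd(thr_no, lay_no);
  for (int ui = net->ThrLayUnStart(thr_no, lay_no); ui < end; ui++) {
    UnitState_cpp* u = net->ThrUnitState(thr_no, ui);  // unit owned by this thread
    // ... operate on u ...
  }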

int Network::ThrNRecvConGps ( int thr_no )

number of recv connection groups as a flat list across all units processed by given thread

Show Source Code

int Network::ThrNSendConGps ( int thr_no )

number of send connection groups as a flat list across all units processed by given thread

Show Source Code

int Network::ThrNUnits ( int thr_no )

number of units processed by given thread

Show Source Code

ConState_cpp* Network::ThrRecvConState ( int thr_no, int thr_cgp_idx )

recv ConState for given thread, thread-specific con-group index

Show Source Code

ConState_cpp* Network::ThrSendConState ( int thr_no, int thr_cgp_idx )

send ConState for given thread, thread-specific con-group index

Show Source Code

float* Network::ThrSendNetinTmp ( int thr_no )

temporary sending netinput memory for given thread -- no NETIN_PER_PRJN version

Show Source Code

float* Network::ThrSendNetinTmpPerPrjn ( int thr_no, int recv_idx )

temporary sending netinput memory for given thread -- NETIN_PER_PRJN version

Show Source Code

int Network::ThrUnGpUnEnd ( int thr_no, int lay_no )

ending thread-specific unit index for given unit group -- this is like the max in a for loop -- valid indexes are < end

Show Source Code

int Network::ThrUnGpUnStart ( int thr_no, int lay_no )

starting thread-specific unit index for given unit group

Show Source Code

bool Network::ThrUnIdxInRange ( int thr_no, int thr_un_idx, bool err_msg = true )

test if thread-based unit index is in range

Show Source Code

int Network::ThrUnNRecvConGps ( int thr_no, int thr_un_idx )

number of recv connection groups for given unit within thread-specific memory at given thread number and thread-specific unit index

Show Source Code

int Network::ThrUnNRecvConGpsSafe ( int thr_no, int thr_un_idx )

number of recv connection groups for given unit within thread-specific memory at given thread number and thread-specific unit index

Show Source Code

int Network::ThrUnNSendConGps ( int thr_no, int thr_un_idx )

number of send connection groups for given unit within thread-specific memory at given thread number and thread-specific unit index

Show Source Code

int Network::ThrUnNSendConGpsSafe ( int thr_no, int thr_un_idx )

number of send connection groups for given unit within thread-specific memory at given thread number and thread-specific unit index

Show Source Code

bool Network::ThrUnRecvConGpInRange ( int thr_no, int thr_un_idx, int recv_idx, bool err_msg = true )

test if thread-specified unit recv con group index is in range

Show Source Code

ConState_cpp* Network::ThrUnRecvConState ( int thr_no, int thr_un_idx, int recv_idx )

recv ConState for given thread, thread-specific unit index, and recv group index

Show Source Code

ConState_cpp* Network::ThrUnRecvConStateSafe ( int thr_no, int thr_un_idx, int recv_idx )

recv ConState for given thread, thread-specific unit index, and recv group index

Show Source Code

bool Network::ThrUnSendConGpInRange ( int thr_no, int thr_un_idx, int send_idx, bool err_msg = true )

test if thread-specified unit send con group index is in range

Show Source Code

ConState_cpp* Network::ThrUnSendConState ( int thr_no, int thr_un_idx, int send_idx )

send ConState for given thread, thread-specific unit index, and send group index

Show Source Code

ConState_cpp* Network::ThrUnSendConStateSafe ( int thr_no, int thr_un_idx, int send_idx )

send ConState for given thread, thread-specific unit index, and send group index

Show Source Code

int Network::ThrUnitIdx ( int thr_no, int thr_un_idx )

flat_idx of unit at given thread, thread-specific unit index (max ThrNUnits()-1)

Show Source Code

UnitState_cpp* Network::ThrUnitState ( int thr_no, int thr_un_idx )

unit state for unit at given thread, thread-specific unit index (max ThrNUnits()-1)

Show Source Code

bool Network::UnFlatIdxInRange ( int flat_idx, bool err_msg = true )

test if unit flat index is in range

Show Source Code

bool Network::UnGpInRange ( int ungp_idx, bool err_msg = true )

test if ungp number is in range

Show Source Code

int Network::UnNRecvConGps ( int flat_idx )

number of recv connection groups for given unit at flat_idx

Show Source Code

int Network::UnNRecvConGpsSafe ( int flat_idx )

number of recv connection groups for given unit at flat_idx

Show Source Code

int Network::UnNSendConGps ( int flat_idx )

number of send connection groups for given unit at flat_idx

Show Source Code

int Network::UnNSendConGpsSafe ( int flat_idx )

number of send connection groups for given unit at flat_idx

Show Source Code

bool Network::UnRecvConGpInRange ( int flat_idx, int recv_idx, bool err_msg = true )

test if unit recv con group index is in range

Show Source Code

bool Network::UnSendConGpInRange ( int flat_idx, int send_idx, bool err_msg = true )

test if unit send con group index is in range

Show Source Code

int Network::UnThr ( int flat_idx )

thread that owns and processes the given unit (flat_idx)

Show Source Code

int Network::UnThrUnIdx ( int flat_idx )

index in thread-specific memory where that unit lives for given unit (flat_idx)

Show Source Code

UnitSpec* Network::UnitSpecFromState ( UnitSpec_cpp* state )

get unit spec corresponding to given state

Show Source Code

void Network::UpdateAllStateConSpecs ( )

update all the State-side specs based on current settings in main specs

Show Source Code

void Network::UpdateAllStateLayerSpecs ( )

update all the State-side specs based on current settings in main specs

Show Source Code

void Network::UpdateAllStatePrjnSpecs ( )

update all the State-side specs based on current settings in main specs

Show Source Code

void Network::UpdateAllStateSpecs ( )

update all the State-side specs based on current settings in main specs

Show Source Code

void Network::UpdateAllStateUnitSpecs ( )

update all the State-side specs based on current settings in main specs

Show Source Code

Method Category: Statistic

void Network::Compute_EpochPRerr ( )

compute epoch-level precision and recall statistics

Show Source Code

void Network::Compute_EpochSSE ( )

compute epoch-level sum squared error and related statistics

Show Source Code

void Network::Compute_EpochStats ( )

compute epoch-level statistics; calls DMem_ComputeAggs (if dmem) and EpochSSE -- specific algos may add more

Show Source Code

void Network::Compute_PRerr ( )

compute precision and recall error statistics over entire network -- true positive, false positive, and false negative -- precision = tp / (tp + fp), recall = tp / (tp + fn), fmeasure = 2 * p * r / (p + r), plus specificity, fall-out, and mcc

Show Source Code

void Network::Compute_SSE ( bool unit_avg = false, bool sqrt = false )

compute sum squared error of activations vs targets over the entire network -- optionally taking the average over units, and square root of the final results

Show Source Code

void Network::Compute_TrialStats ( )

compute trial-level statistics (SSE and others defined by specific algorithms)

Show Source Code

void Network::Init_Metrics ( )

this is an omnibus function that initializes every metric: Counters, Stats, and Timers

Show Source Code

taString Network::MemoryReport ( bool print = true )

report about memory allocation for the network

Show Source Code

void Network::MonitorVar ( NetMonitor* net_mon, taString& variable )

monitor (record in a datatable) the given variable on this network

Show Source Code

void Network::ProjectUnitWeights ( UnitState_cpp* un, int top_k_un = 5, int top_k_gp = 1, bool swt = false, bool zero_sub_hiddens = false )

project given unit's weights (receiving unless swt = true) through all layers (without any loops) -- results stored in wt_prjn on each unit (tmp_calc1 is used as a sum variable). top_k_un (< 1 = all) is number of strongest units to allow to pass information further down the chain -- lower numbers generally make the info more interpretable. top_k_gp is number of unit groups to process for filtering through, if layer has sub groups (< 1 = ignore subgroups). values are always normalized at each layer to prevent exponential decrease/increase effects, so results are only relative indications of influence -- if zero_sub_hiddens, then intermediate hidden units (indicated by layer_type == HIDDEN) that have sub-threshold values are zeroed

Show Source Code

bool Network::SnapAnd ( taString& variable )

do an AND-like MIN computation of the current snap unit variable and the current value of the specified variable (or currently selected variable in netview if empty or using from the gui) -- shows the intersection between current state and previously snap'd state

Show Source Code

bool Network::SnapOr ( taString& variable )

do an OR-like MAX computation of the current snap unit variable and the current value of the specified variable (or currently selected variable in netview if empty or using from the gui) -- shows the union between current state and previously snap'd state

Show Source Code

bool Network::SnapThresh ( float thresh_val = 0.5f, taString& variable )

take a snapshot of specified variable (or currently selected variable if empty) in netview -- copies this value to the snap unit variable, but also applies a thresholding such that values above the thresh_val are set to 1 and values below the thresh_val are set to 0

Show Source Code

bool Network::SnapVar ( taString& variable )

take a snapshot of specified variable (or currently selected variable in netview if empty or using from the gui) -- copies this value to the snap unit variable

Show Source Code

bool Network::Snapshot ( taString& variable, SimpleMathSpec& math_op, bool arg_is_snap = true )

take a snapshot of given variable (if empty, currently viewed variable in netview is used): assign snap value on unit to given variable value, optionally using simple math operation on that value. if arg_is_snap is true, then the 'arg' argument to the math operation is the current value of the snap variable. for example, to compute intersection of variable with snap value, use MIN and arg_is_snap.

Show Source Code
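
A sketch of a typical snapshot workflow combining these calls, assuming net has an open netview and "act" is a valid unit variable name:

  taString act_var = "act";
  net->SnapVar(act_var);   // copy current act values into the snap variable
  // ... present a different input pattern and settle the network ...
  net->SnapAnd(act_var);   // MIN with the previous snapshot: intersection of active units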

Method Category: Statistics

DataTable* Network::ConVarsToTable ( DataTable* dt, taString& var1, taString& var2, taString& var3, taString& var4, taString& var5, taString& var6, taString& var7, taString& var8, taString& var9, taString& var10, taString& var11, taString& var12, taString& var13, taString& var14 )

record given connection-level variable to data table with column names the same as the variable names, and one row per *connection* (unlike monitor-based operations which create matrix columns) -- this is useful for performing analyses on learning rules as a function of sending and receiving unit variables -- uses receiver-based connection traversal -- connection variables are just specified directly by name -- corresponding receiver unit variables are 'r.var' and sending unit variables are 's.var'

Show Source Code

Method Category: Structure

bool Network::AutoBuild ( )

called by ProjectBase::AutoBuildNets() -- does auto-building and loading of weight files after project is loaded

Show Source Code

void Network::Build ( )

Build the network units and Connect them (calls CheckSpecs/BuildLayers/Units/Prjns and Connect)

Show Source Code

bool Network::CheckBuild ( bool quiet = false )

check if network units are built

Show Source Code

void Network::CheckSpecs ( )

check to make sure that specs are not null and set to the right type, and update with new specs etc to fix any errors (with notify), so that at least network operations will not crash -- called in Build and CheckConfig

Show Source Code

bool Network::ComputeHash ( bool incl_weights = true )

create a unique hash code of the entire state of the network, including all indexes, sizes, connectivity, and optionally the weight values -- used for testing whether different networks are identical, e.g., across DMem / mpi nodes

Show Source Code

void Network::Compute_LayerDistances ( )

compute distances between layers and input/output layers

Show Source Code

void Network::Compute_PrjnDirections ( )

compute the directions of projections based on the relative distances from input/output layers (calls Compute_LayerDistances first)

Show Source Code

void Network::DeIconifyAllLayers ( )

de-iconify all of the layers in the network (turns off ICONIFIED flag, makes them show up in the network display)

Show Source Code

Layer* Network::FindLayer ( taString& nm )

find layer by name

Show Source Code

Layer_Group* Network::FindLayerGroup ( taString& nm )

find a given layer group -- only searches in top-level layer groups

Show Source Code

Layer* Network::FindMakeLayer ( taString& nm, TypeDef* td = __null, bool& nw_itm = nw_itm_def_arg, taString& alt_nm )

find a given layer and if not found, make it (of default type if NULL) (if nm is not found and alt_nm != NULL, it is searched for)

Show Source Code

Layer_Group* Network::FindMakeLayerGroup ( taString& nm, TypeDef* td = __null, bool& nw_itm = nw_itm_def_arg, taString& alt_nm )

find a given layer group and if not found, make it (of default type if NULL) (if nm is not found and alt_nm != NULL, it is searched for)

Show Source Code

Projection* Network::FindMakePrjn ( Layer* recv, Layer* send, ProjectionSpec* ps = __null, ConSpec* cs = __null, bool& nw_itm = nw_itm_def_arg )

find a projection between two layers using given specs, make it if not found; if existing prjn between layers exists, it will be modified with current specs

Show Source Code
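
A sketch of programmatic structure building with these Find/Make methods, assuming net points to an un-built Network in a project (layer names are illustrative; default specs are used):

  taString in_nm = "Input", hid_nm = "Hidden", out_nm = "Output";
  Layer* in_lay  = net->FindMakeLayer(in_nm);
  Layer* hid_lay = net->FindMakeLayer(hid_nm);
  Layer* out_lay = net->FindMakeLayer(out_nm);
  net->FindMakePrjn(hid_lay, in_lay);   // Hidden receives from Input
  net->FindMakePrjn(out_lay, hid_lay);  // Output receives from Hidden
  net->Build();                         // allocate units and connections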

Projection* Network::FindMakePrjnAdd ( Layer* recv, Layer* send, ProjectionSpec* ps = __null, ConSpec* cs = __null, bool& nw_itm = nw_itm_def_arg )

find a projection between two layers using given specs, make it if not found; if existing prjn between layers exists but has diff specs, a new prjn is made

Show Source Code

Projection* Network::FindMakeSelfPrjn ( Layer* recv, ProjectionSpec* ps = __null, ConSpec* cs = __null, bool& nw_itm = nw_itm_def_arg )

find a self projection using given specs, make it if not found; if existing self prjn exists, it will be modified with current specs

Show Source Code

Projection* Network::FindMakeSelfPrjnAdd ( Layer* recv, ProjectionSpec* ps = __null, ConSpec* cs = __null, bool& nw_itm = nw_itm_def_arg )

find a self projection using given specs, make it if not found; if existing self prjn exists but has diff specs, a new prjn is made

Show Source Code

BaseSpec* Network::FindMakeSpec ( taString& nm, TypeDef* td, bool& nw_itm = nw_itm_def_arg )

find a given spec and if not found, make it

Show Source Code

BaseSpec_Group* Network::FindMakeSpecGp ( taString& nm, bool& nw_itm = nw_itm_def_arg )

find a given spec group and if not found, make it

Show Source Code

BaseSpec* Network::FindSpecName ( taString& nm )

find a given spec by name

Show Source Code

BaseSpec* Network::FindSpecType ( TypeDef* td )

find a given spec by type

Show Source Code

void Network::GetLocalistName ( )

look for a receiving projection from a single unit, which has a name: if found, set our name to that name

Show Source Code

void Network::IconifyAllLayers ( )

iconify all of the layers in the network (turns on ICONIFIED flag, shrinks layers to size of 1 unit in the network display, or makes them invisible if lesioned)

Show Source Code

void Network::LayerPos_Cleanup ( )

cleanup the layer positions relative to each other (prevent overlap etc) -- sets layers to use relative positioning based on their current relative positions if they overlap

Show Source Code

void Network::LayerPos_GridLayout_3d ( int x_space = 2, int y_space = 3, int z_size = 3, int gp_grid_x = -1, int lay_grid_x = -1 )

for the 3D layer positions: arrange layers and layer subgroups into a grid with given spacing, distributed across given number of z (vertical) layers, and you can optionally constrain the x (horizontal) dimension of the grid for the subgroups within the network or layers within groups (or just the layers if no subgroups) if gp_grid_x > 0 or layer_grid_x > 0

Show Source Code

void Network::LayerPos_RelPos ( )

update relative positioning of units in the layer according to any active pos_rel settings -- also checks for any loops and breaks them -- called automatically during build

Show Source Code

void Network::LayerZPos_Unitize ( )

set layer z axis positions to unitary increments (0, 1, 2.. etc)

Show Source Code

void Network::LesionAllLayers ( )

lesion all of the layers in the network (turns on LESIONED flag)

Show Source Code

int Network::LesionCons ( float p_lesion, bool permute = true )

remove connections with prob p_lesion (permute = fixed no. lesioned)

Show Source Code

int Network::LesionUnits ( float p_lesion, bool permute = true )

turn on unit LESIONED flags with prob p_lesion (permute = fixed no. lesioned)

Show Source Code

taString Network::NetPrjnsToList ( taMarkUp::Format fmt, bool include_off = false )

record the network projection structure to a mark-up formatted list, indented by layer group (if present), layer, then projections including the connection and projection specs used -- optional whether to include lesioned layers and projections that have the off flag marked

Show Source Code

DataTable* Network::NetPrjnsToTable ( DataTable* dt = __null, bool include_off = false )

record the network projection structure to given data table, with one row per projection per layer, including the connection and projection specs used and notes -- optional whether to include lesioned layers and projections that have the off flag marked

Show Source Code

void Network::NetStructFmTable ( DataTable* dt )

configure network structure (layer and layer group names, sizes, positions, connectivity) from data table (should be in same format as generated by NetStructToTable)

Show Source Code

DataTable* Network::NetStructToTable ( DataTable* dt = __null, bool list_specs = false )

record the network structure to given data table, including names of layers and layer groups, sizes, and where each layer receives projections from and sends projections to -- if list_specs also include columns for layer and unit specs

Show Source Code

Layer* Network::NewLayer ( )

create a new layer in the network, using default layer type

Show Source Code

int Network::ProbAddCons ( float p_add_con, float init_wt = 0.0 )

probabilistically add new connections (assuming prior pruning), init_wt = initial weight value of new connection

Show Source Code

int Network::PruneCons ( SimpleMathSpec& pre_proc, Relation::Relations rel, float cmp_val )

remove weights that (after pre-proc) meet relation to compare val

Show Source Code

bool Network::RecvOwnsCons ( )

does the receiver own the connections (default) or does the sender?

Show Source Code

void Network::RemoveCons ( )

Remove all connections in network -- generally should not be called separately -- use UnBuild() to cleanly remove everything

Show Source Code

bool Network::RemoveLayer ( taString& nm )

remove layer with given name, if it exists

Show Source Code

bool Network::RemovePrjn ( Layer* recv, Layer* send, ProjectionSpec* ps = __null, ConSpec* cs = __null )

remove a projection between two layers, if it exists

Show Source Code

void Network::RemoveUnits ( )

synonym for UnBuild -- remove all units in network -- also calls RemoveCons()

Show Source Code

int Network::ReplaceConSpec ( ConSpec* old_sp, ConSpec* new_sp, bool prompt = true )

switch any connections/projections using old_sp to using new_sp

Show Source Code

int Network::ReplaceLayerSpec ( LayerSpec* old_sp, LayerSpec* new_sp, bool prompt = true )

switch any layers using old_sp to using new_spec -- optionally prompt for each replacement

Show Source Code

int Network::ReplacePrjnSpec ( ProjectionSpec* old_sp, ProjectionSpec* new_sp, bool prompt = true )

switch any projections using old_sp to using new_sp

Show Source Code

void Network::ReplaceSpecs ( BaseSpec* old_sp, BaseSpec* new_sp, bool prompt = true )

replace a spec of any kind, including iterating through any children of that spec and replacing all those with corresponding child in new spec

Show Source Code

void Network::ReplaceSpecs_Gp ( BaseSpec_Group& old_spg, BaseSpec_Group& new_spg, bool prompt = true )

replace a specs on two matching spec groups, including iterating through any children of each spec

Show Source Code

int Network::ReplaceUnitSpec ( UnitSpec* old_sp, UnitSpec* new_sp, bool prompt = true )

switch any units/layers using old_sp to using new_sp

Show Source Code

void Network::SetUnitNames ( bool force_use_unit_names = false )

for all layers, set unit names from the unit_names matrix (called automatically on Build) -- also ensures unit_names fits the geometry of the layer -- if force_use_unit_names is true, then unit_names will be configured to save values if it is not already

Show Source Code

void Network::SetUnitNamesFromDataTable ( DataTable* unit_names_table, int max_unit_chars = -1, bool propagate_names = false )

label units in the network based on unit names table -- also sets the unit_names matrix in the layer so they are persistent -- max_unit_chars is max length of name to apply to unit (-1 = all) -- if propagate_names is set, then names will be propagated along one-to-one projections to other layers that do not have names in the table (GetLocalistName)

Show Source Code

void Network::SyncSendPrjns ( )

synchronize sending projections with the recv projections so everyone's happy

Show Source Code

void Network::UnBuild ( )

un-build the network -- remove all units and connections -- network configuration is much faster when operating on an un-built network

Show Source Code

void Network::UnLesionAllLayers ( )

un-lesion all of the layers in the network (turns off LESIONED flag)

Show Source Code

void Network::UnLesionUnits ( )

un-lesion units: turn off all unit LESIONED flags

Show Source Code

void Network::UpdatePrjnIdxs ( )

fix the projection indexes of the connection groups (recv_idx, send_idx)

Show Source Code

DataTable* Network::VarToTable ( DataTable* dt, taString& variable )

send given variable to data table -- number of columns depends on variable (if a network, one col; if a layer, number of layers, etc.). for projection data, specify prjns.xxx; for weight values, specify r. or s. (e.g., r.wt) -- this uses a NetMonitor internally (just does AddNetwork with variable, then gets data), so see documentation there for more information

Show Source Code

bool Network::VarToVal ( taString& dest_var, float val )

set variable to given value for all units within this network (must be a float type variable)

Show Source Code

bool Network::VarToVarCopy ( taString& dest_var, taString& src_var )

copy one unit variable to another (un->dest_var = un->src_var) for all units within this network (must be a float type variable)

Show Source Code
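
A small sketch combining these two calls, assuming net is a built Network* and "act" and "targ" are float unit variables defined by the unit spec in use:

  taString targ = "targ", act = "act";
  net->VarToVal(targ, 0.0f);     // set targ to 0 on every unit
  net->VarToVarCopy(targ, act);  // un->targ = un->act for every unit (dest, src)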

DataTable* Network::WeightsToTable ( DataTable* dt, Layer* recv_lay, Layer* send_lay )

send entire set of weights from sending layer to recv layer in given table (e.g., for analysis), with one row per receiving unit, and the pattern in the event reflects the weights into that unit

Show Source Code

Method Category: UserData

void taOBase::RemoveAllUserData ( )

get rid of our user data list entirely -- this is done automatically when saving something that has no user data items, but you can also force it with this method -- deletes the whole list

Show Source Code

Method Category: _NoCategory

bool taNBase::AddFromTemplate ( taBase* obj, bool& is_acceptable )

handles drops from toolbar - when adding an object to a program, network, etc - e.g. dropping generic data table onto a program - set is_acceptable for objects 'not handled' but which are acceptable

Show Source Code

void Network::ClearIntact ( )

call this when any element in the network is updated such that the current built status is no longer valid

Show Source Code

void Network::ClearNetFlag ( Network::NetFlags flg )

clear flag state (set off)

Show Source Code

TypeDef* Network::ConStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

bool Network::EditState ( )

edit the network state values that drive actual C++ computation

Show Source Code

taBase* taOBase::GetOwner ( )

Show Source Code

TypeDef* Network::GetTypeDef ( )

Show Source Code

bool Network::HasNetFlag ( Network::NetFlags flg )

check if flag is set

Show Source Code

bool Network::IsBuiltIntact ( )

is this network currently built and intact?

Show Source Code

bool Network::IsIntact ( )

is this network currently intact?

Show Source Code

TypeDef* Network::LayerStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

bool Network::LoadFmWeights ( Weights* wts, bool quiet = false )

load weight values from given weights object

Show Source Code

TypeDef* Network::NetworkStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

TypeDef* Network::PrjnStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

void Network::SaveToWeights ( Weights* wts )

write weight values out to given weights object (NULL = make a new one)

Show Source Code

void Network::SetNetFlag ( Network::NetFlags flg )

set flag state on

Show Source Code

void Network::SetNetFlagState ( Network::NetFlags flg, bool on )

set flag state according to on bool (if true, set flag, if false, clear it)

Show Source Code

void Network::SpecComparePeers ( BaseSpec* key_spec, BaseSpec* peer_spec )

creates a table with a column of values for key_spec and another column for peer_spec - values for peer_spec are shown if different from key_spec - if table with key_spec exists a call with a new peer adds a column to the table

Show Source Code

void Network::SpecCompareWithChildren ( BaseSpec* parent_spec )

creates a table with a column of values for the parent spec and each child spec -- values are shown if the member is on and if a child spec also checks override -- if both are true the value is displayed

Show Source Code

TypeDef* Network::UnGpStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

TypeDef* Network::UnitStateType ( )

each type of Network MUST override this to match type of state it uses

Show Source Code

taSigLink** taOBase::addr_sig_link ( )

Show Source Code


Expert Member and Method Documentation


Expert Member Documentation

Expert Member Category: Learning

Network::small_batch_n_eff : int

effective batch_n value = batch_n except for dmem when it = (batch_n / epc_nprocs) >= 1

Expert Member Category: Specs

Network::n_con_specs_built : int

number of con specs built

Network::n_layer_specs_built : int

number of layer specs built

Network::n_prjn_specs_built : int

number of prjn specs built

Network::n_unit_specs_built : int

number of unit specs built

Expert Member Category: State

Network::con_state_size : int

size in *bytes* of con group objects actually built

Network::hash_value : byte_Array

unique hash code value of network including the indexes, sizes, connectivity, and optionally weights -- used to guarantee identical state of networks across dmem / mpi for example

Network::layer_state_size : int

size in *bytes* of the layer_state LayerState

Network::layer_state_sync : NetStateSync_List

handles optimized state sync for layer object

Network::main_obj : bool

true if this is a main-side object (emergent, TA-enabled) as opposed to a State-side object

Network::n_layers_built : int

number of state layers when built -- size of state_layers array

Network::n_prjns_built : int

number of state projections when built -- size of state_prjns array

Network::n_thrs_built : int

number of threads that the network was built for -- must use this number of threads for running network, and rebuild if the number changes

Network::n_ungps_built : int

number of state unit groups when built -- size of state_ungps array

Network::n_units_built : int

number of units built -- actually the n+1 size of units_flat

Network::net_state_sync : NetStateSync_List

handles optimized state sync for network object

Network::prjn_state_size : int

size in *bytes* of the prjn_state PrjnState

Network::prjn_state_sync : NetStateSync_List

handles optimized state sync for projection object

Network::state_con_specs : taBase_RefList

con_specs that have been built for running network

Network::state_layer_specs : taBase_RefList

layer_specs that have been built for running network

Network::state_layers : taBase_RefList

layers that have been built for running network

Network::state_prjn_specs : taBase_RefList

prjn_specs that have been built for running network

Network::state_prjns : taBase_RefList

prjns that have been built for running network

Network::state_unit_specs : taBase_RefList

unit_specs that have been built for running network

Network::ungp_state_size : int

size in *bytes* of the ungp_state UnGpState

Network::unit_state_size : int

size in *bytes* of the UnitState

Expert Member Category: Statistic

Network::cur_cnt_err : float

current cnt_err -- used for computing cnt_err

Network::cycle_time : TimeUsed

time used for computing a cycle (managed entirely by programs -- not always used)

Network::epc_prerr : PRerrVals

precision and recall error values for the entire network, over an epoch or similar larger set of external input patterns

Network::epoch_time : TimeUsed

time used for computing an epoch (managed entirely by programs -- not always used)

Network::group_time : TimeUsed

time used for computing a group, when groups used (managed entirely by programs -- not always used)

Network::misc_time : TimeUsed

misc timer for ad-hoc use by programs

Network::net_timing : NetTiming_List

timing for different network-level functions -- per thread, plus one summary item at the end

Network::prerr : PRerrVals

precision and recall error values for the entire network, for the current external input pattern

Network::settle_time : TimeUsed

time used for computing a settling (managed entirely by programs -- not always used)

Network::sum_prerr : PRerrVals

precision and recall error values for the entire network, over an epoch or similar larger set of external input patterns -- these are always up-to-date as the system is aggregating, given the additive nature of the statistics

Network::train_time : TimeUsed

time used for computing entire training (across epochs) (managed entirely by programs -- not always used)

Network::trial_time : TimeUsed

time used for computing a trial (managed entirely by programs -- not always used)

Network::wt_sync_time : TimeUsed

time used for the DMem_SumDWts operation (trial-level dmem, computed by network)

Expert Member Category: Structure

Network::layers : Layer_Group

Layers or Groups of Layers

Network::max_disp_size : PosVector3i

maximum display size in each dimension of the net

Network::max_disp_size2d : PosVector2i

maximum display size in each dimension of the net -- for 2D display

Network::max_prjns : int

maximum number of prjns per any given layer or unit in the network

Network::param_seqs : ParamSeq_Group

parameter sequences keyed off of epoch -- supports automatic arbitrary parameter changes whenever the network epoch is incremented

Network::spec_tables : DataTable_Group

Tables comparing parent and child specs

Network::specs : BaseSpec_Group

Specifications for network parameters

Network::weights : Weights_List

saved weights objects

Expert Member Category: _NoCategory

Network::brain_atlas_name : taString

the name of the brain atlas that we're using -- this is what is actually saved, because the ref is not saveable

Network::needs_prjn_pass2 : bool

tmp flag managed by ProjectionSpec Connect_Cons to determine if any projections need a second pass (i.e., if any respond false for pass = 1)

Network::needs_wt_sym : bool

tmp flag managed by Init_Weights to determine if any connections have the wt_limits.sym flag checked and thus need weight symmetrizing to happen

Network::net_state : NetworkState_cpp*

our C++ network state -- handles full implementation

Expert Member Category: taBase

taFBase::file_name : taString

The most recent file saved or loaded in association with this object.

taOBase::owner : taBase*

pointer to owner

taOBase::user_data_ : UserDataItem_List*

storage for user data (created if needed) DO NOT ACCESS this list directly -- use the GetUserData / SetUserData etc interface!


Expert Method Documentation

Expert Method Category: Counter

void Network::Init_Counters ( )

initialize all counter variables on network (called in Init_Weights; except batch because that loops over inits!)

Show Source Code

Expert Method Category: File

int Network::Save_strm ( ostream& strm, taBase* par = __null, int indent = 0 )

Save object data to a file stream

Show Source Code

Expert Method Category: Statistic

void Network::Init_Stats ( )

initialize statistic variables on network

Show Source Code

void Network::Init_Timers ( )

initialize timer variables on network

Show Source Code

Expert Method Category: UserData

UserDataItem_List* taOBase::GetUserDataList ( bool force = false )

gets the userdatalist for this class

Show Source Code


Copyright © 2017 Regents of the University of Colorado, Carnegie Mellon University, Princeton University.
emergent 8.2.2