map3d.c
Description:
The task of this routine is to decompose the geometrical blocks (gbs) generated by the grid generators icem or igg into a parallel block structure (pbs) and to distribute the new blocks to the different processors. While reading this description it is helpful to examine the documentation for subroutine readmap.F (especially for understanding the patches and transfer tables).
Definitions:
- numbering of faces:
      1  west      6  east
      2  south     5  north
      3  bottom    4  top
- gbs = global block structure:
referring to the block structure defined by the grid generator
- pbs = parallel block structure:
referring to the block structure for parallelization
- gid = global index direction:
referring to indices of parent block (of patch) (3D)
0 is first global index direction (i)
1 is second global index direction (j)
2 is third global index direction (k)
- lid = local index direction:
referring to indices on a patch (2D)
0 is first local index direction
1 is second local index direction
  global vs. local index direction per face (both numbering tables
  are encoded in the C sketch after these definitions):

      face | const. gid | loc first (0) | loc sec (1)
      -----+------------+---------------+------------
      1,6  |  i const   |     1 (j)     |    2 (k)
      2,5  |  j const   |     2 (k)     |    0 (i)
      3,4  |  k const   |     0 (i)     |    1 (j)
- int anchor[3]
global indices of the grid point in the geometric block where the
parallel block starts, i.e. the offset of the parallel block in its
parent block; computed in the map routine from the lengths of the
pbs-blocks in the parent block
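The two numbering tables above can be condensed into a short C sketch;
identifier names such as FACE_WEST and face_directions are illustrative
and not taken from map3d.c:

    /* Face numbering (see table above); names are illustrative. */
    enum face_id {
        FACE_WEST = 1, FACE_SOUTH = 2, FACE_BOTTOM = 3,
        FACE_TOP  = 4, FACE_NORTH = 5, FACE_EAST   = 6
    };

    /* For a face 1..6, report which gid is held constant and which
     * gids the two local index directions (lid 0, lid 1) run along,
     * following the global/local table above. */
    static void face_directions(int face, int *gid_const,
                                int *gid_loc0, int *gid_loc1)
    {
        switch (face) {
        case FACE_WEST: case FACE_EAST:     /* i = const */
            *gid_const = 0; *gid_loc0 = 1; *gid_loc1 = 2; break;
        case FACE_SOUTH: case FACE_NORTH:   /* j = const */
            *gid_const = 1; *gid_loc0 = 2; *gid_loc1 = 0; break;
        case FACE_BOTTOM: case FACE_TOP:    /* k = const */
            *gid_const = 2; *gid_loc0 = 0; *gid_loc1 = 1; break;
        }
    }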
Definitions of used structures:
- struct gbs_block: geometrical block (from grid generator)
- struct gbs_patch: patch on gbs-block
- struct pbs_block: parallel block computed in map3d
- struct pbs_patch: patch on pbs-block
- struct pbs_con_patch: connectivity patch on pbs-block
- struct pbs_bc_patch: boundary condition patch on pbs-block
- struct pbs_divline: line at which a block is divided
- struct point
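A minimal sketch of two of these structures, restricted to the fields
mentioned in this description (the actual definitions contain further
members):

    /* Sketch of struct pbs_block, limited to the fields discussed
     * below; the real struct has more members. */
    struct pbs_block {
        int dim[3];      /* #CV+1 in i-, j-, k-direction          */
        int gbs_iblock;  /* number of the parent gbs-block        */
        int anchor[3];   /* global offset of the block in parent  */
        int iproc;       /* processor the block is distributed to */
    };

    /* Sketch of struct pbs_divline: where a block is divided. */
    struct pbs_divline {
        int iblock;      /* block number                          */
        int gid;         /* global index direction of the cut     */
        int index;       /* gid-index where to divide             */
    };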
Structure of routine map3d.c:
First the grid file coming from the icem or igg grid
generator with the geometrical block structure is read to get
information about the geometry, the geometrical blocks (number of
blocks) and the grid coordinates (dimensions in i-, j-, k-direction).
Then the md-file is read to get the following information:
- number of multigrid levels
- number of processors
- mapping strategy (see below)
- location of monitoring point
- number and description of flow regions
- number and description of rotating regions
- clicking grids
- connectivity accuracy (for consistency check)
- allocation (how much memory is needed)
Then subroutine read_tbc is called for velocities and
temperature. It reads the topology and the BC from a tbc-file
generated by icem. The information for the different patches is stored
in an array of patch structures (*p_gbs_patch_tab). The
information comprises the boundary condition on the patch, the number
of the parent block, the face number on the parent block, the local
min and max indices in i- and j-direction, the number of the neighbour
patch, and the orientation of the neighbour patch.
Then the patches are checked for consistency (all faces are covered
with patches, and neighbouring patches have the same coordinates along
their common boundary).
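The coordinate comparison can be pictured as a tolerance check; a
sketch, assuming struct point has x/y/z members (the member and helper
names are assumptions) and eps is the connectivity accuracy from the
md-file:

    #include <math.h>

    /* Sketch of struct point; member names are assumptions. */
    struct point { double x, y, z; };

    /* Two boundary grid points count as coincident if they match
     * within the connectivity accuracy eps from the md-file. */
    static int points_match(const struct point *a,
                            const struct point *b, double eps)
    {
        return fabs(a->x - b->x) <= eps &&
               fabs(a->y - b->y) <= eps &&
               fabs(a->z - b->z) <= eps;
    }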
In the following the blocks are partitioned and distributed to the
processors according to the entries in the md-file
(partitioning strategies):
- oneproc
parallel block is equal to geometrical block; nothing to be done
- map9
information from map/myproject.md:
### processor info
1 1 2 0 #1 block
1 2 1 0 #2 block
...
1 1 1 0 #n block
first three numbers:
number of blocks in i-, j-, k-direction (1 means: no division)
last number (see the sketch after this list):
0: all new blocks created by division (pbs-blocks) are distributed
to the current processor (iproc)
1: all pbs-blocks are distributed to the next processor (iproc+1)
2: every new pbs-block is distributed to a new processor
All blocks must have the same number of CVs.
- map8
like map9, but the number of CVs may differ between the pbs-blocks.
To be improved! (the first nblock-1 blocks get the same number of
CVs, block nblock gets the rest)
- map7
the assignment of blocks to processors is set explicitly in
map/myproject.md:
### processor info
1 1 1 : 1          #1 block
2 2 1 : 1 2 3 4    #2 block
...
1 1 1 : 1          #n block
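Two hypothetical helpers illustrate these formats (sketches only, not
taken from the source): target_proc evaluates the distribution flag of
a map9/map8 line, parse_map7_line reads a map7 line.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: target processor for the n-th pbs-block (n = 0,1,...)
     * created from one gbs-block, according to the last number of a
     * map9/map8 line; iproc is the current processor. */
    static int target_proc(int flag, int iproc, int n)
    {
        switch (flag) {
        case 0:  return iproc;      /* all new blocks stay on iproc   */
        case 1:  return iproc + 1;  /* all new blocks to iproc + 1    */
        case 2:  return iproc + n;  /* each new block on its own proc */
        default: return iproc;
        }
    }

    /* Sketch: parse one map7 line "ni nj nk : p1 p2 ... pm" into
     * division counts div[] and processor list procs[]; returns the
     * number of processors read, or -1 on a malformed line. */
    static int parse_map7_line(char *line, int div[3],
                               int *procs, int maxp)
    {
        char *colon = strchr(line, ':');
        int np = 0;
        if (!colon ||
            sscanf(line, "%d %d %d", &div[0], &div[1], &div[2]) != 3)
            return -1;
        for (char *tok = strtok(colon + 1, " \t\n");
             tok != NULL && np < maxp; tok = strtok(NULL, " \t\n"))
            procs[np++] = atoi(tok);
        return np;
    }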
For each pbs-block (l=1,nblock) the mapping routines compute
the following numbers (*pp_pbs_block_tab is an array of
structs with pbs-block information):
- (*pp_pbs_block_tab)[l].dim[0]
#CV+1 in i-direction
- (*pp_pbs_block_tab)[l].dim[1]
#CV+1 in j-direction
- (*pp_pbs_block_tab)[l].dim[2]
#CV+1 in k-direction
- (*pp_pbs_block_tab)[l].gbs_iblock
number of parent block
- (*pp_pbs_block_tab)[l].anchor[0]
i-coordinate for anchor
- (*pp_pbs_block_tab)[l].anchor[1]
j-coordinate for anchor
- (*pp_pbs_block_tab)[l].anchor[2]
k-coordinate for anchor
- (*pp_pbs_block_tab)[l].iproc
processor the block is distributed to
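For the uniform map9 case, dim and anchor follow directly from the
parent dimensions and the division counts; a one-direction sketch
(helper name is illustrative), where ncv is the CV count of the parent
gbs-block in that direction and ndiv the requested number of blocks
(map9 requires ncv to be divisible by ndiv):

    /* Sketch: dim and anchor of the m-th pbs-block (m = 0..ndiv-1)
     * in one index direction for a uniform map9-style division. */
    static void split_direction(int ncv, int ndiv, int m,
                                int *dim, int *anchor)
    {
        int ncv_loc = ncv / ndiv;  /* map9: divides evenly          */
        *dim    = ncv_loc + 1;     /* #CV+1 grid points             */
        *anchor = m * ncv_loc;     /* global offset in parent block */
    }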
Now we have the parallel block structure with the blocks assigned to
the different processors.
The next task is to build up the list of new patches, one for
velocity, one for temperature.
This is done in two steps:
build_pbs_patch_list_stage1: patches and connections are
adapted to the (new) pbs-block structure, i.e. the patches are
divided according to the divline-list. The divline-list gives
information about the block number, the global index direction and
the gid-index where to divide the patch. It is set up according to
the endpoints of the gbs- and pbs-blocks. Then a consistency check
follows.
build_pbs_patch_list_stage_2: divides patches so that the
patch topology is unique. The divline-list is built according to
the endpoints of the patches. This may result in a very high number
of pbs-patches, so routine contract_pbs_con_patch_list is called to
merge patches that are redundantly divided.
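The effect of a single divline on one patch can be sketched as follows
(struct and helper names are illustrative): the patch is cut at the
divline's gid-index when that index falls strictly inside the patch
along the affected local direction.

    /* Sketch: patch extent in its two local index directions. */
    struct patch_range { int lmin[2], lmax[2]; };

    /* Split a patch at index idx along local direction lid (0 or 1);
     * returns 1 and fills left/right if the divline cuts the patch,
     * 0 if the patch is left untouched. */
    static int split_patch(const struct patch_range *p, int lid, int idx,
                           struct patch_range *left,
                           struct patch_range *right)
    {
        if (idx <= p->lmin[lid] || idx >= p->lmax[lid])
            return 0;              /* divline misses this patch */
        *left  = *p;  left->lmax[lid]  = idx;
        *right = *p;  right->lmin[lid] = idx;
        return 1;
    }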
Then subroutine set_mon sets the pbs-block number, processor
number and pbs indices for the monitoring point in each pbs-block.
After that subroutine set_pre sets the pressure reference
points.
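Locating the monitoring point amounts to finding the pbs-block whose
index range contains its global indices and subtracting the anchor; a
sketch using the pbs_block fields listed above (the helper name and
0-based block counting are illustrative):

    /* Sketch: find the pbs-block (0-based here) of gbs-block gblk
     * that contains global indices g[3]; on success fill the local
     * pbs indices loc[3] and return the block index, else -1. */
    static int locate_point(const struct pbs_block *tab, int nblock,
                            int gblk, const int g[3], int loc[3])
    {
        for (int l = 0; l < nblock; l++) {
            int inside = (tab[l].gbs_iblock == gblk);
            for (int d = 0; d < 3 && inside; d++)
                inside = g[d] >= tab[l].anchor[d] &&
                         g[d] <= tab[l].anchor[d] + tab[l].dim[d] - 1;
            if (!inside)
                continue;
            for (int d = 0; d < 3; d++)
                loc[d] = g[d] - tab[l].anchor[d];  /* pbs = global - anchor */
            return l;
        }
        return -1;
    }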
At the end subroutine create_map creates the output file, which
contains the new parallel block structure, patches, etc., and which
is read by fmg3d.F.