High-Performance Parallel Computing on Cluster Systems

Federal Agency for Education. State Educational Institution of Higher Professional Education "Vladimir State University". HIGH-PERFORMANCE PARALLEL COMPUTING ON CLUSTER SYSTEMS. Proceedings of the Ninth International Conference-Seminar, Vladimir, 2–3 November 2009. Vladimir, 2009.


  • UDC 681.3.012:51; BBK 32.973.26-018.2:22

    Sponsors and partners: Intel Technologies, Microsoft, American Power Conversion, Parallel.Ru.

    High-Performance Parallel Computing on Cluster Systems: Proceedings of the Ninth International Conference-Seminar, Vladimir, 2–3 November 2009.

    ISBN 978-5-89368-958-7, 2009

  • 3

    23 2009 . - . .. .. - . .. , Intel, Microsoft, APC - -.

    -:

    - ;

    ;

    , - ;

    GRID-; -

    ; -

    .

    , - - - , - , - - .

    ,

    , , ( ).

The conference was supported by the Russian Foundation for Basic Research, grant 09-01-06108.

  • 4

    MPI Profiling with the Sun Studio Performance Tools Marty Itzkowitz, Yukon Maruyama, Vladimir Mezentsev

    Sun Microsystems, 16 Network Circle, Menlo Park, CA 94025, USA 1 Introduction

This paper describes the various techniques implemented in the Sun Studio Performance Tools to profile MPI applications. We describe the characteristics of the MPI programming model, review the specific performance issues that arise with it, and show how the tools can help.

1.1 The Sun Studio Performance Tools

The Sun Studio Performance Tools are designed to collect performance data on fully optimized and parallelized applications written in C, C++, Fortran, or Java, and any combination of these languages. Data is presented in the context of the user's programming model.

    The tools support code compiled with the Sun Studio or GNU compilers. They also work on code generated by other compilers, as long as those compilers produce compatible standard ELF and DWARF symbolic information.

    The tools run on the Solaris or Linux operating systems, on either SPARC or x86/x64 processors. The current version, Sun Studio 12 update 1, is available for free download [1].

1.1.1 The Sun Studio Performance Tools Usage Model

The usage model for the performance tools consists of three steps. First, the user compiles the target code. No special compilation is needed, and full optimization and parallelization can be used. It is recommended that the -g flag be used to get symbolic and line-number information into the executable. (With the Sun Studio compilers, the -g flag does not appreciably change the generated code.)

    The second step is to collect the data. The simplest way to do so is to prepend the collect command with its options to the command to run the application. The result of running collect is an experiment which contains the measured performance data. With appropriate options, the data collection process has minimum dilation and distortion, typically about 5%, but somewhat larger for MPI runs.

    The third step in the user model is to examine the data. Both a command-line program, er_print, and a GUI interface, analyzer, can be used to examine the data. Much of the complexity introduced into the execution model of the code comes from optimizations and transformations performed by the compiler. The Sun compilers insert significant compiler commentary into the compiled code. The performance tools show the commentary, allowing users to understand exactly what transformations were done.

2 MPI Performance Issues

MPI programs run as a number of distinct processes, on the same or different nodes of a cluster. Each process does part of the computation, and the processes communicate with each other by sending messages.

The challenge in parallelizing a job with MPI is to decide how the work will be partitioned among the processes, and how much communication between the processes is needed to coordinate the solution. To address these aspects of MPI performance, data is needed on the overall application performance, as well as on specific MPI calls.

Communication issues in MPI programs are explicitly addressed by tracing the application's calls to the MPI runtime API. The data is collected using the VampirTrace [2] hooks, augmented with callstacks associated with each call. Callstacks are directly captured, obviating the need for tracing all function entries and exits, and resulting in lower data volume.

MPI tracing collects information about the messages being transmitted and also generates metrics reflecting MPI API usage: MPI Time, MPI Sends, MPI Receives, MPI Bytes Sent, and MPI Bytes Received. Those metrics are attributed to the functions in the callstack of each event.

Unlike many other MPI performance tools, the Sun Studio Performance Tools can collect statistical profiling data and MPI trace data simultaneously on all the processes that comprise the MPI job. In addition, during clock-profiling on MPI programs, state information about the MPI runtime is collected indicating whether the MPI runtime is working or waiting. State data is translated into metrics for MPI Work Time and MPI Wait Time. State data is available only with the Sun HPC ClusterTools 8.1 (or later) version of MPI, but trace and profile data can be captured from other versions of MPI.

2.1 Computation Issues in MPI Programs

The computation portion of an MPI application may be single-threaded or multi-threaded, either explicitly or using OpenMP. The Sun Studio Performance Tools can analyze data from the MPI processes using any of the techniques described in the previous sections for single- and multi-threaded profiles. The data is shown aggregated over all processes, although filtering can be used to show any subset of the processes. Computation costs are shown as User CPU Time (with clock-profiling); computation costs directly attributable to the MPI communication are shown as MPI Work Time, a subset of User CPU Time. Time spent in MPI is shown as MPI Time, which represents the wall-clock time, as opposed to the CPU Time, spent within each MPI call.

2.2 Parallelization Issues in MPI Programs

Problems in partitioning and MPI communication can be recognized by excessive time spent in MPI functions. The causes of too much time in MPI functions may include: load imbalance; excessive synchronization; computation granularity that is too fine; late posting of MPI requests; and limitations of the MPI implementation and communication hardware.

    Many MPI programs are iterative in nature, either iterating on a solution until numerical stability is reached, or iterating over time steps in a simulation. Typically, each iteration in the computation consists of a data receive phase, a computation phase, and a data send phase reporting the results of the computation.

3 Using the Sun Studio Performance Tools to Analyze MPI Programs

3.1 Using the MPI Timeline to Visualize Job Behavior

    The MPI Timeline gives a broad view of the application behavior, and can be used to identify patterns of behavior and to isolate a region of interest.

3.2 Using MPI Charts to Understand Where Time Was Spent

The Analyzer's initial MPI Chart shows in which MPI function the time is spent. The MPI Charts can be used to understand the patterns of communication between processes. If some processes are running slower than others, or if the behavior is consistent over time, the MPI Charts provide a powerful way to explore these types of issues.

3.3 Using Filters to Isolate Behaviors of Interest

    The MPI filters can be used to pick out behaviors of interest and determine which events are responsible.

References

1. Sun Studio Downloads, http://developers.sun.com/sunstudio/downloads/index.jsp
2. VampirTrace, http://www.tu-dresden.de/zih/vampirtrace

  • 6

    - MPI .. , ..

    , [email protected], [email protected]

    -

    ( )[1], - - . - - MPI [2].

    MPI , , - O(N2), N- MPI-, . -, RemoteProcedureCall (RPC) -- - - MPI [4]. , , - , - MPI - .

    -

    - - . , - . ( ) : - /; - - , ; DRAM-. - J7-2[1], - 1.

    , , (-) - [3].

    -, -, . - . - () , : , - , .

    -, 64- - f/e- , . . f/e- .

  • 7

    synchronize f/e- empty , , , f/e- full . empty , . - . f/e- - empty.

    1. - ,

    J7-2 , 0.5 / 2/64 , 4 , /way 1/4 -, / 64 DRAM-, / 25.6 4D-, / 4

    RemoteProcedureCall (RPC) - - - . - RPC. - , - - .

    - Charm++ , 512 -100k, . .

    MPI

    MPI MPI- - J7, .. , - [3]. J7, , 4 , . - . , MPI-, . , - MPI ( , ), . - , , MPI- . MPI_Init, RPC. , MPI- . . - run-time . J7 64 .

    MPI , - MPI-, MPI-,

  • 8

    , . - MPI- .

    MPI- -

    . - 64- , 32- : (head) --- (tail) . head tail - . - . - . . .. , , .

    head tail 32 32

    . : valid, (64 ); header MPI (264 ), MPI_Request (64 ); data (364 ). valid 0, , 1 . (tail) . , f/e- valid full. - , f/e- valid empty.

    valid header header p_MPI_Request data data data

    , (24 -

    ). . , . - . - - -. , 28 . 28, 210 213 - MPI- -.

    tail, -

    32 . - , - . , , RPC . , RPC. , . synchronize, valid f/e full.

    synchronize. , , f/e valid full. valid 1, , 0, . , , . , , -

  • 9

    ( f/e empty) synchronize, .

    , -. , , head . , . - , .

    - MPI_Send MPI_Recv -

    . MPI_Isend MPI_Irecv , . MPI_Isend MPI--. MPI_Irecv MPI-, MPI_Irecv.

    MPI

    . . 1 PingPong HP-MPI 2.02 Infiniband DDR ( 2Gbytes/sec), AMD Opteron 8431 Istanbul 2,4GHz. , , , - MPI.

[Figure 1: PingPong test results; performance versus message size (8 bytes to 4 MB) for Angara MPI, HP MPI, MPICH M2, and MPICH M3.]

    , 4 MPI -

    , - . - .

Curves 2 and 3 in Fig. 1 were obtained with MPICH-2 1.1.1. Curve 2 corresponds to a 2D-torus prototype (up to 32 nodes) with 6 Gbit/s links, Pentium 4 3.0 GHz hosts, PCI-Express 4x, and Virtex4 FPGAs. Curve 3 corresponds to a 2D-torus prototype with 13 Gbit/s links, PCI-Express 8x, and Virtex5 FPGAs.

  • 10

    MPI

    - . - O(N) - (N- MPI-); - Send Recv , ; - ; - , -/ .

    1. ., . -

// . 2007. 9. C. 42–51.
2. MPI: A Message-Passing Interface Standard // International Journal of Supercomputer Applications and High Performance Computing. MIT Press, 1994. pp. 159–416.
3. . ., .., . . -

    - . // : , , . 2009. 1. . 50-61.

    4. Patrick Geoffray, A Critique of RDMA, Myricom Inc, August 18, 2006.

  • 11

    PowerXCell8i. SPE-

    .. , .. ,

Modern x86 processors deliver on the order of 50 GFlop/s, whereas accelerators such as the IBM Cell [1-3] and NVIDIA GPUs [4, 5] reach roughly 150 GFlop/s and up to 1 TFlop/s, respectively.

    [6, 7] - - Cell. PPE- -, PPE, SPE . -, PPE , SPE. SPE- , SPE - , , .

    MOLKERN [8].

The electrostatic potential is split into a short-range erfc(r)/r part and a long-range erf(r)/r part, following the P3M method [9]; van der Waals interactions are described by the 6-12 Lennard-Jones potential. With this decomposition the complexity of computing the non-bonded interactions drops from O(N²) to O(N·logN).
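The erfc/erf splitting can be illustrated in a few lines: the two parts sum back to the bare 1/r Coulomb kernel, and the short-range part decays rapidly so it can be cut off. The splitting parameter `alpha` below is illustrative, not a value from the paper:

```python
import math

def coulomb_split(r, alpha=1.0):
    """Ewald/P3M-style splitting of the 1/r kernel into a short-range
    erfc(alpha*r)/r part and a long-range erf(alpha*r)/r part."""
    short_range = math.erfc(alpha * r) / r  # computed directly, cut off at some radius
    long_range = math.erf(alpha * r) / r    # smooth; handled on a mesh in P3M
    return short_range, long_range
```

Because erfc(x) falls off like exp(-x²), the short-range term is negligible beyond a modest cutoff, which is what makes the O(N·logN) mesh treatment of the remainder possible.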

    - [6, 7].

    PowerXCell8i Cell Broadband Engine (CBEA)

The Cell Broadband Engine Architecture (CBEA) was developed jointly by Sony, Toshiba, and IBM. The PowerXCell8i [10] (hereafter simply Cell) is a heterogeneous multicore processor with nine cores. The PPE (PowerPC Processor Element) is a general-purpose PowerPC core that runs the operating system and dispatches work. The remaining eight cores are SPEs (Synergistic Processor Elements), accelerator cores controlled by the PPE. Each SPE has its own local store (LS) of 256 KB that holds both its code and its data.

  • 12

Data is moved between main memory and the SPE local stores by asynchronous DMA transfers. Each SPE has 128 registers of 128 bits each and operates on them with SIMD instructions.

The experiments were run on a PeakCell S server [11] with PowerXCell8i processors at 3.2 GHz and 16 GB of RAM under Fedora 7. The processors are linked by FlexIO with a throughput of 20 GB/s, forming a NUMA (Non-Uniform Memory Architecture) system.

    runtime-

    libspe2 IBM SDK for Multicore acceleration for Linux [12] Pthreads SPE-. SPE- , , , . - SPE- PPE, - PPE.

    dU__dX() SPE, SIMD-. , . , - -, inline- MASS.

    SPE- SPE- PPE-, -

    , PPE. SPE , - . , SPE - SPE. , .

    SPE- PPE- . put(), calculate(), - get() (. . 1). , PPE- , put() PPE , , . SPE PPE . SPE, - PPE, , [6], SPE , get().

  • 13

    1. SPE- .

    SPE- , -

    put() calculate(). put() calculate() get(). ( ) - . , -. Cell - Cell - . put() SPE - DMA- . - 4 , SIMD-, SIMD-.

    2

    SPE. - 1GC1 , 124 723 25.0 .

    PPE- - 8 SPE PPE 32 ; - AMD Athlon X2, 2.6 6 . SPE- . 16 SPE 149 30 PPE AMD . , SPE- - SPE, . - Cell , , - NUMA PeakCell S.

    SPE-

    . SPE ( 8 SPE),

  • 14

The resulting speedup is 149× relative to the single PPE core and 30× relative to the AMD Athlon X2.

    2. dU__dX()

    SPE. () PPE, () AMD Athlon X2, 2.6 .

    :

    26 , - - 113 - , - . T-platforms (http://www.t-platforms.ru) PeakCell S - PowerXCell8i.

    1. Olivier S., Prins J., Derby J. Porting the GROMACS Molecular Dynamics Code to the

Cell Processor // 21st International Parallel and Distributed Processing Symposium (IPDPS 2007). Proceedings. Long Beach, California, USA, 26-30 March 2007. P. 1–8.

    2. Shi G., Kindratenko V. Implementation of NAMD molecular dynamics non-bonded force-field on the Cell Broadband Engine processor // 9th IEEE International Workshop on Par-allel and Distributed Scientific and Engineering Computing (PDSEC 2008). Proceedings. 2008.

    3. Using Cell Broadband Engine Technology to Improve Molecular Modeling Applications // SimBioSys: White Papers. 2009. [Electronic resource]. URL: http://www.simbiosys.ca/science/white_papers/IBM_eHiTS_BLW03019USEN_1.1.pdf (date of access 09.29.2009).

  • 15

4. Anderson J. A., Lorenz C. D., Travesset A. General Purpose Molecular Dynamics Simulations Fully Implemented on Graphics Processing Units // J. Comput. Phys. 2008. Vol. 227. P. 5342–5359.

5. Liu W., Schmidt B., Voss G., Müller-Wittig W. Accelerating molecular dynamics simulations using Graphics Processing Units with CUDA // Computer Physics Communications. 2008. Vol. 179, No. 9. P. 634–641.

    6. .., .. - MOLKERN Cell // - (2009): . , 30 3 2009 . : . , 2009. . 772777.

    7. Fomin E., Alemasov N. Implementation of Non-bonded Interaction Algorithm for the Cell Architecture. // Parallel computing technologies 2009 / Malyshkin V; Springer. LNCS, Vol. 5698. 2009. P. 399405.

    8. . C., . ., . ., . . MOLKERN // . 2006. . 51, . 7. . 110113.

    9. Hockney, R., and Eastwood, J. Computer simulation using particles. New York: McGraw-Hill, 1981.

    10. IBM PowerXCell 8i processor datasheet // IBM: Resources. 2009. [Electronic resource]. URL: http://www-03.ibm.com/technology/resources/technology_cell_pdf_PowerXCell_PB_7May2008_pub.pdf (date of access 09.29.2009).

    11. PeakCell S // -. 2009. [ ]. URL: http://www.t-platforms.ru/ru/tcell/peakcellsserver.html ( 29.09.2009).

    12. Programmer's Guide to the IBM SDK for Multicore Acceleration v3.0 // IBM. 2009. [Electronic resource]. URL: https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/1DAAA0A3B6404763002573530066008C/$file/CBE_Programmers_Guide_v3.0.pdf (date of access 09.29.2009).

  • 16

    ..

    [email protected]

    , , - , , . , , - - , , . - -. . 10%. [1] - - , . - - . , - . , , . .

    - . , -, .

    , KOJAK [2] TAU [3] - . , , , , . , , . , - , .

    , , - . (GAS), , , - . Unified Parallel C (UPC) [4], Titanium [5], SHMEM Co-Array Fortran (CAF) [6].

  • 17

    -

    , (PGAS). Unified Parallel C (UPC). , MPI, . , , , , PGAS . [7] - , - , (locality), - . GAS , - send receive , . , , , -, MPI , - [8, 9].

    - , . , GAS. - , (relaxed) , - , . - . - , (source-to-source compilation). , - , , , - . , ( - ), . , - UPC , (GCC-UPC, Cray UPC) , - (Berkeley UPC, HP UPC MuPC). - , , . -, . , - , -, , .

    , GAS . . , .

  • 18

    - . , - . - GAS .

    , UPC (GASP). - , GAS ( 1). , GASP (callback) gasp_event_notify ( 2), GAS - , , - . , . , . , , , - . - , CPU, PAPI [10], - .

    1. GAS ,

    GASP . Callback gasp_event_notify

    , (opaque) , - , - .

    GASP , - , - , . , -

  • 19

    , - .

    2. GASP.

    UPC

    , . - , ( -) ( (bulk) - ) . - , (fence), , - . (work-sharing), , . , - . , , broadcast, scatter (heap). , -, , - . - .

    - . -, --inst --inst-local, , - , -. , , - . , , , - , . #pragma, - . , - - .

    1. Federal Plan for High-End Computing: Report of the High-End Computing Revitalization

    Task Force (HECRTF), 2004, http://www.nitrd.gov/pubs/2004_hecrtf/20040702_hecrtf.pdf.

  • 20

    2. Mohr B., Wolf F.: KOJAK - A Tool Set for Automatic Performance Analysis of Parallel Applications. Proceedings of the International Conference on Parallel and Distributed Computing (Euro-Par 2003). Klagenfurt, Austria (September 2003).

    3. Shende S., Malony A.D.: TAU: The TAU Parallel Performance System. International Journal of High Performance Computing Applications. 20:2 (2006) 287-331.

    4. UPC Consortium: UPC Language Specifications v1.2. Lawrence Berkeley National Lab Tech Report LBNL-59208 (2005).

    5. Yelick K.A., Semenzato L., Pike G., Miyamoto C., Liblit B., Krishnamurthy A., Hilfinger P.N., Graham S.L., Gay D., Colella P., Aiken A.: Titanium: A High-Performance Java Dialect. Concurrency: Practice and Experience, 10:11-13 (1998).

    6. Numrich B., Reid J.: Co-Array Fortran for Parallel Programming. ACM Fortran Forum. 17:2 (1998) 1-31.

    7. DARPA High Productivity Computing Systems (HPCS) Language Effort http://www.highproductivity.org/.

    8. Bell C., Bonachea D., Nishtala R., Yelick K.: Optimizing Bandwidth Limited Problems Using One-Sided Communication and Overlap. 20th International Parallel & Distributed Processing Symposium (IPDPS), 2006.

    9. Datta K., Bonachea D., Yelick K.: Titanium Performance and Potential: an NPB Experi-mental Study. Languages and Compilers for Parallel Computing (LCPC), 2005.

    10. Browne S., Dongarra J., Garner N., Ho G., Mucci P.: A Portable Programming Interface for Performance Evaluation on Modern Processors. International Journal of High Per-formance Computing Applications (IJHPCA), 14:3 (2000) 189-204.

  • 21

    .. , .. , .. , .. - . .., ,

    1.

    . , - .

    - - - . .

    2. -

    ABC, Li2O:SrO:P2O5. -

    , 1200 . . - , , -, - .

    , : , .

    3.

    . - -, , [1]. . . .

    - )(nij . - )(nijT

    )(nAijC ,

    )(nBijC ,

    )(nCijC .

    -, .

    3.1.

    . - :

  • 22

    ( ))()()()()()1( 1 nijnijnklnklnijnij CDCDmCC ++ , (1) D ; m , , -

    t

    xm =2

    ,

    , t ; ),(),( jiOlk ;

    )()( nkl

    nkl CD

    ;

    =lk

    nkl

    nkl

    nkl

    nkl CDCD

    ,

    )()()()(

    61

    , (2)

    :

D(T) = D0 · exp(−E / (R·T)),   (3)

where D0 is the pre-exponential factor, E is the activation energy, and R is the universal gas constant.
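Equation (3) is the standard Arrhenius law for the temperature dependence of the diffusion coefficient; a one-line sketch (SI units assumed: E in J/mol, T in K):

```python
import math

def diffusion_coefficient(T, D0, E, R=8.314):
    """Arrhenius law, eq. (3): D(T) = D0 * exp(-E / (R*T))."""
    return D0 * math.exp(-E / (R * T))
```

The exponential makes the cell-automaton update strongly temperature-dependent: diffusion is much faster at melt temperatures than near the solidification front.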

    , - :

    ( ))()()()()()1( 1 nijnijnklnklnijnij TaTapTT ++ , (4) ;

    , . )()( n

    kln

    kl Ta ;

    =lk

    nkl

    nkl

    nkl

    nkl TaTa

    ,

    )()()()(

    61

    , (5)

    , . - dT.

T^(n+1) = T^(n) + dT,   (6)

    3.2. .

    };{)( nij , - nij = )( . - nij = )( , - - nij = )( nij = )( , ..

  • 23

    nij = )( .

    };{)( nij . -.

    : nij = )( ,

    - ,

    nij = )( ;

    nij = )( nij = )( - , - .

    - . :

    ; ; .

    4.

    , . , , - , , , - . , (), - - , . , , , .

    , - .

    , - . , . , - , , (. 1).

    , , , - .

    : , ID. , , - () . , , - . ,

  • 24

    , , (. 2).

    1. . , , .

    2. .

    MPI.

    5. -

    , Linux Gnome. , .

    , - . - . , - Li2O:SrO:P2O5:

22:44:34, 23:46:31, 24:48:24, 25:50:25.

  • 25

    , - .

    6. 1. -

    ; 2.

    . . C++ MPI, - , C++, - Gnome;

    3. , - ;

    4. .

    . - - .

    ( 02.524.11.4006/7934-), - , 2.1.1/2104.

    1. .., .., .., ..

    . .: , 2001. 408 ., .

  • 26

    ,

    .. , .. , ..

    , {arut, ssg, samov}@ispras.ru

    , , - . - , - .

    2008 Hewlett-Packard - , -.

    , , , - .

    35 -, 21 - . 2009 - 12 - , 3 : , ,

  • 27

    .. , .. ,

    -, , , -. - -.

Let X = {X_1, ..., X_N} be the set of input vectors, X_c = (x_c1, ..., x_cn)^T, c = 1..N, where N is the number of vectors.

    SMP .

    . , n -, , 2s -, ss (. 1).

    1. .

W = (w_ijk), i, j = 1..s, k = 1..n, denotes the weight matrix, where the pair (i, j) indexes a neuron of the s-by-s lattice and k indexes a component of the n-dimensional input vector.

The winning neuron (best-matching unit) for input X_c is found as

(v(X_c), u(X_c)) = argmin_(i,j) ||X_c − W_ij||²,  c = 1..N,   (1)

where W_ij is the weight vector of neuron (i, j).

[Figure 1: an s-by-s lattice of neurons (1,1) ... (s,s), each connected to all components x_c1 ... x_cn of the input vector.]

  • 28

w_ijk ← w_ijk + h_ij · α · (x_ck − w_ijk),  c = 1..N, i, j = 1..s, k = 1..n,   (2)

h_ij = exp(−||W_vu − W_ij||² / (2σ²)),   (3)

where α is the learning rate, h_ij is the neighborhood function, and W_vu is the weight vector of the winning neuron [1, 2].

Following [1], the learning rate and the neighborhood radius decay exponentially:

α(t) = α0 · exp(−t/τ1),  σ(t) = σ0 · exp(−t/τ2),   (4)

with α0 = 0.1, τ1 = T, σ0 = s/2, and

τ2 = T / ln σ0,   (5)

where t is the epoch number, t = 1..T, and T is the total number of training epochs (typically T = 500–2000).

    -

    . ( p - ), ( )spG , 0 ( )1p

    ( )

  • 29

    mH 4.

    ( )m ,...,1 ( )uf . i , mi ,1= , , ( )

    21=< jiP

    ( )uf , mji ,1, = , ji . , -

    { }mBm ,...,1= . - .

    ,

    01 =H , 21

    2 =H . (8) 1mB ( )1m , 1mH .

    m - ( 2).

    1 2 1m m 2. m -

    m

    p 1= -

    11 += mm HH .

    =m

    q 11 m - -

    1= mm HH . : ( ) 11 1111

    ++= mmm HmHmH . (9)

Equation (9) simplifies to the recurrence

H_m = H_{m−1} + 1/m.   (10)

Combining the base cases (8) with recurrence (10) yields

H_m = Σ_{i=2}^{m} 1/i,  m ≥ 2.   (11)

    (11) (12), - (13):

    ,13212862

    2

    21

    ++++=

    =

    s

    i insNTL (12)

    ( ) ( ) ( ) .231313,212862

    ,

    2max

    max

    +++++++=

    ==pn

    iinspGNTL

    p

    i

    spG

    ip (13)

    3 -.

    maxR 25=p 11,68 .

    pp < p 2s , - .

  • 30

    4 - .

    () ()

    3. (a) () ( 100=T , 100=N , 3=n , 10=s )

    () ()

    4. (a) - () ( 100=T , 100=N , 3=n )

    ,

    maxR p

    ,2,152,0max sR += (14).63,272,0 sp += (15)

    . , , - , -. - .

    1. C. . : , 2- : . . - : -

    , 2006. 1104 . 2. .. , .. . .

    : , 2002. 317 . 3. . . . . 3. : .

    . : , 1978. 841 .

The speedup and efficiency are defined as R = L_1 / L_p and E_p = L_1 / (p · L_p).

  • 31

    .. , .. , .. , .. - ,

    . ..,

    , E0~10-100 , [1,2]. - 3 3f( , , t)d ddW r V= V r 3d r , 3d V V t . - - , - f( , , t)V r , -.

    E0~10-100 ,

    , . , . - , (., , [3]), .

    . - , - - . (0, t) - ( ), d + [4]

dW/dΩ ∝ exp(−θ²/Θ²),  Θ² = 4Dt,   (1)

where D is the angular diffusion coefficient.

    1. .

    -

    , . -

  • 32

    I0~10. , [5]:

    VF 4 24

    mVZenA = , V/ VV = , 62ln

    0

    2

    =

    ZImV

    . . (ee-), -, ee- - : , - .

    :

    2 3sin2cos

    dd a = , 2sinE E = ,

    - , [4]. a=e2/E ~ 0.001..- , E=mV2/2, m , e , Z - - , - ( V ) ( V ) .

    -

    . t = 0 0 0(0,0, )V=V - (.2) (x,y,z) z ( , ). t, - , , ,

    ( )2 3

    2 4

    1 ( ), ( )10 8 A

    m Vt t V t Vn Z e V

    = =

    2. .

    , , ,

    ( ) 20

    4 3ln 1mVV

    Z I = +

    (t, V, r) = (t, Vx, Vy,Vz , x, y, z),

    . : (0, 0, 0,V0, 0, 0, 0).

    , :

  • 33

    (t, V, r) (t+t, V, r) ( ), , -

    (. .3): sin cos , sin sin , cosx y zV V V V V V = = =

    t , . , - . - ()

  • 34

    ( ) 2 22

    11 exp , ln1

    = =

    ,

    (0,1). : . -, : t+= Vrr .

    f(V,r) - (.2). ls/10, ls -, V0/10. - , a, - . fa , - , . ~1.

    -

    , - , . - , . - , , - , . - ( - ) .

    4. .

    -

    web- - (.4).

  • 35

    , , . GRID - 16 2- . 2- 4- , - : . - . - : , -, GRID - . 108 109. - .

    " " ( 2.1.1/2637).

    1. . . -

    . 2. .., .., .., , 172, 155(2002) 3. .., . . . 4. .., .., .., .. -

    - -. . . 2.

    5. .., .., . . .

  • 36

    ..

    ( ), ,

    1. : -

    () . - - . [1]. - - ().

    . - - , [2]. . [2-6], - . - [7] - . : ) , ) - , ) - ( ). - - (-).

    , , [3].

    , - - . , - , . - i- (i=1, 2,...) - mi n i 3mi +1 [2].

    . [2-4, 6] - : - m, 1) - - ; 2) - - - -. - , , - n=3m+1. [2-4] .

  • 37

    [6] - , , , 3m+1. - ( -1), - .

    . - - s - r - - - s [7]. - s - r - . - .

    - , - ( ) , - - . - . - - i - j ( ij) [7] , i, - j, . i- ij , . - - ().

    - : j- - mj, - i- - - --.

    . - . N - . i- - maxim

    minim ,

    , - . - , , - , , .

    2. , , -

    [9] v w. -

    :

  • 38

    1) , , , 2) [6] -

    , 3)

    [6] , 4) , . (.1) 1. , . 2. v', , -

    . 3. v'', v' w,

    , w. v' 2.

    4. , - . .

    5.

    , .

    .

    , .. . - , , ni ni-1 ni-1, ni i- -. . - , , . -, , , - , . , - . , , - .

[Figure 1: chain of nodes v → v′ → v″ → w.]

  • 39

    , .. , -. ( ) - .

    - , ( ). , - .

    . - .

    , .2. - m1=m2=1, 4 , .

    3, 7, 8, 9.

    - 1 : 37, 38, 39, 723, 78, 7119, 83, 87, 89, 93, 98, 9117.

    4, 5, 6, 10. - 2 .

    : 1, 4, 5, 6. 2

[Figure 2: example system graph with nodes 1–11.]

  • 40

    : 1104, 15, 16, 4101, 45, 46, 51, 54, 56, 61, 64, 65.

    , . : 1) -

    , - -, 2) , 3) .

    1. .. -

    . . 2. .., .., .. -

    - // . 1989. 5. . 3-18.

    3. Pease M., Shostak R., Lamport L. Reaching agreement in the presence of faults // J. Ass. Comput. Mach. 1980. V. 27. 2. P. 228-237.

    4. Lamport L., Shostak R., Pease M. The byzantine generals problem // ACM Trans. Progr. Lang. and Syst. 1982. V. 4. 3. P. 382-401.

    5. Barborak M., Malek M. The consensus problem in fault-tolerant computing // ACM Computing Surveys. June 1993. V. 25. 2. P. 171-220.

    6. .., .., .. - // . 2003. 5. . 190-199.

    7. .. - - // . 2009. 2. . 171-189.

    8. ., ., . -. .:, 1979.-536 .

    9. .., .. : . . .: - , 1992.-264 .

  • 41

    Grid

    .. 1), .. 2), .. 2), .. 2), .. 2) 1)

    2)

    - , Grid-. 2005 2007 .. -, , .NET Framework. - , [1, 2, 3]. - - ( 130 ) - 2 9 . - . -, , Grid (), - .

    -. . , 1 . . - . . - . -:

    1) ; 2) ;

    3) ; 4) , ; 5) . ,

    . : , ; ; , ; . , - ,

    ...: [email protected] ...: [email protected] ...:[email protected] ...: [email protected] ...: [email protected]

  • 42

    . - - . , - , .

    , :

    1) ; 2) ; 3) ; 4) - ; 5) ; 6) ; 7) . - - Alchemi,

    - .NET-, , .

    , - . , , -, . - Alchemi. , . - -, - . . -, , . , Alchemi Manager, . , . . - , , - Alchemi , - LKH. , , . , - Alchemi. - . - , - , - 1 1,5 . , ( 1,5 2 ) , , . - , . - -

  • 43

    , . , , , , , ( 10 ). , , , - . . , -, , - , - - , , ( ), . : - .

    1. .., .. -

    Smart Truck. : , 2008. 50200800675. 2. ..

    / .. , .. // Microsoft : . IV . . ., . . ., 2-3 . 2007: , : [. .] / . . - (. . -) [ .]. - ., 2007. - C. 169-170.

    3. .. - / .. , .. // - (AIS`07). (CAD-2007): . . .-. -, , 3-10 . 2007 / . - " - -" [ .]. - ., 2007. - .III. - C. 75-77.

    4. Alchemi. [ ]. : www.alchemi.net 5. GPE. [ ]. : http://gpe4gtk.sf.net 6. Grid over Internet. [ ]. :

    http://sourceforge.net/forum/message.php?msg_id=5676056 7. The Traveling Salesman Problem using genetic algorithm. [ ].

    : http://www.lalena.com/AI/Tsp/

  • 44

    .. , .. , .. ...

    -

    , [1 6] . - . - . ( ), . - ( ), . - , - . - ..., -.

    = ( y ) =m i n { ( y ) : yD , g j ( y ) 0 , 1 j m } , D= { yR N : a i y i b i , 1 i N } , (1)

    ( y ) ( g m + 1 ( y ) ) g j ( y ) , 1 j m , L j , 1 j m+ 1 ,

    g j ( y 1 ) g j ( y 2 ) L j | y 1 y 2 | , 1 j m+ 1 , y 1 , y 2D . y ( x ) , [ 0 , 1 ] N - D

    D= { yR N : 2 1 y i 2 1 , 1 i N }= { y ( x ) : 0 x 1 } , :

    ( y ( x ) ) = m i n { ( y ( x ) ) : x [ 0 , 1 ] , g j ( y ( x ) ) 0 , 1 j m } . , (. [1]), ..

    | g j ( y ( x ) ) g j ( y ( x ) ) | K j | x x | 1 / N , x , x [ 0 , 1 ] , 1 j m + 1 , N , K j L j K j 4 L j N . - [3], [5].

    , - . , x [ 0 , 1 ] , y ( x )R N

  • 45

    2 N . N - y ' , y ' ' - x ' , x ' ' [0,1]. , ( 2 N ) , , , - .
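The idea of indexing points of the N-cube by a single scalar x can be illustrated with bit de-interleaving (the Z-order, or Morton, curve), a simpler relative of the Peano-type evolvent y(x) discussed here; this sketch is an illustration only, not the evolvent actually used by the method:

```python
def z_order_point(x_bits, N, depth):
    """De-interleave depth*N bits of a scalar into N coordinates in [0, 1)."""
    coords = [0] * N
    total = depth * N
    for level in range(depth):
        for dim in range(N):
            # take bits of x in order, dealing one bit per dimension per level
            bit = (x_bits >> (total - 1 - (level * N + dim))) & 1
            coords[dim] = (coords[dim] << 1) | bit
    return [c / float(1 << depth) for c in coords]
```

Like the true evolvent, each extra level of `depth` refines the image of the curve in the cube; unlike the evolvent, Z-order does not preserve the Hölder continuity property used in the convergence analysis.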

    -

    Y L ( x ) = { y 1 ( x ) , , y L ( x ) } (2) y ( x ) (. [2], [4]). y i ( x ) Y L ( x ) D . y ' , y ' ' , , x ' , x ' ' y i ( x ) .

    -

    ( -) , -, , . -, - L ( , ) . - , log2(1) . , 103 - , 10.

    , N - , . - , - . y i ( x ) , - y ' , y ' ' , [0,1], - x ' , x ' ' .

    , , - -.

    . 1 , N = 2 . N - -, , .

  • 46

    1.

    , N-

    , 2 N . , . - /2 .

    , 2

    )1(2 = NNCN , N(N1). , , N(N1)+1 - N- . - , - [1], -. ( 2 N ) .

    Y L ( x ) = { y 1 ( x ) , , y L ( x ) }

    m i n { ( y l ( x ) ) : x [ 0 , 1 ] , g j ( y l ( x ) ) 0 , 1 j m } , 1 l L . ,

    z = g ( y ' ) , y ' = y i ( x ' ) g ( y ) i - - z= g ( y ' ) , y ' = y s ( x ' ' ) s - g ( y ) . - (1) - L (3) [0,1]. - . - L , . - , .

    [7].

    -

φ(y) = Σ_{i=1}^{N} (y_i² − cos(18·y_i)),  −1.5 ≤ y_i ≤ 1.5, 1 ≤ i ≤ N, N = 6,

with the global minimum φ(y*) = −N attained at y* = 0.

  • 47

    m=12, r =2.3, =0.05 . L=30 , . - - 100 . , , 257984 ( 8.5 ), - 12015 ( 1.2 -). 21.5 , 7 .

    ,

    (-) . -, - -. , . GloblExpert, ....

    ( - , 02.740.11.5018).

    1. .. . .: , 1990. 2. .. // . . . . . 1991. .31. 8. . 1173 1185. 3. .., .. - - // -. .: , 1999. . 273 288. 4. Strongin R.G., Sergeyev Ya.D. Global optimization with non-convex constraints. Sequen-tial and parallel algorithms. Kluwer Academic Publishers, Dordrecht, 2000. 5. .., .. - // . . . . . 2002., .42, 9. C. 13381350. 6. .. . : - . -, 2005. 7. .., .., .. - . .

  • 48

    .. , .. , .. ...

    - , 2100. - , - . -, , . (parallel or distributed genetic algorithms). - , . , - . .

    : (1) -

    ; (2) , . master-slave. - - .

    master-slave. - . , , , , , , - . - . - , . - master-slave. .

    , - . , , , , - . , .

    . , Grefenstette, . , ( ),

  • 49

    , . . - : . .

    () , .

    , - . , -, , , -, .

    - , . , -. , . , - . : (1) - , ; (2) - .

    , : (1) , ; (2) .

    . . , - . , , . , -, -. : . -, - .

    , , - , - .

    , -

    . , , master-slave ..

    , - , .

    , .

  • 50

    -

    .. , ..

    [email protected]

    - , [1, 2].

    - - .

    , - - . - , -.

    - - . , , : (), , , - . , - , - .

    A99+8%Al3Ti, L=0,15 , l=0,3 , D=40 , d=20 (. . 1).

    1. .

  • 51

    , - () 300 (. . 1).

    , V S wk , Sk. = kwV1 - wk )1(ijF ( ), - V2 = VV1 )2(ijF ()

    = kSS12 , - (. . 2). - .

    ) ) 2. ();

    A99+8%Al3Ti.

    .2. Al3Ti 6 - 20. -

    Al3Ti, 99. , , -

    [3]. , ,

    , . - ( -) X - :

    0)()(, =+ rXr jjij , (1) - :

  • 52

    [ ])()(21)( ,, rurur ijjiij += (2)

    Sk . , , - :

    [ ] [ ] 0,0 )()( == ijij un (3) , . 1. -

    P 50-80 . -.

    - - .

    Al3Ti , 10 .

    , ANSYS. ANSYS - 6 .

    - , .

    8 , . - 4160576 1379294 , 0,5 .

    - - .

    . 3 =60 . - -15 -61 , - , .

    Y , - -36 -64 , X -42 73 .

    ) ) 3. :

    ) - ; ) - .

  • 53

    . 3 -. , 0,001, - - , 1-4 . .

    - .

    . - - A99+8%Al3Ti, , . - - , .

    1. .. .-.: , 1975.-415 . 2. .. .- .: , 1977.-

    400 . 3. .., .., .. -

    . / . .. . .: . , 1997. 288 .

  • 54

    - ..

    [email protected]

    .. , .. . ..

    [email protected], [email protected] -

    . - , . , - , . - .

    - : .

    - . , . 0,01 . , - . -1000 1000 . , , , . - . - , , . -, , . , , , , .

    . , -. (. [1]) .

    , , - , - . - , , . , . , - , , - , , , -. . - . - , . ,

  • 55

    , , . - , . , - , , - , . , - , .

    , . , - , - . . .

    , . . . : . , . , . , . , .

    . , . - . - .

    . , , . . n- n+1. , , . , , , . . . -. - Optimal Brain Surgery . - . , . - , , - . , - , , , , .

    , . , , , . -

  • 56

    . , , . , -, - , . , -, , , . - , . - -. - .

    - .

    . , , , . , , - , , , , . . . -, . - , . . - .

    . - . GANS [2], - C# .NET 2.0. .NET C# , C# .NET , -, . , - [3], , Microsoft .NET , C++ - gcc, 3.9 . , [3] , , , Division-intensive loop Polynomial evaluation, - .NET , gcc, 2%--20%. , .NET . , GANS - , , .

    - . - . , . , - , 0 1, , , -. , -

  • 57

    , 50 , -. , - . , - , . , - , . , - , 0,6 , - 5%. - "- ". , , , . -, , . , 1200, 60 1, 1141 . , 1 . 0,1 . 10 , - .

    , - . - 8-12 , 100 .

    , - , . , - . , , .

    . - - , , . - . , -, , , , -, , . GRID- - , : , , . , . - , , , - , .

    1. .. . . -2008. .10

    . .152. 2. .. , .. . -

    GANS. VI . . 4. - . -: 2009. C. 151-157.

    3. Numeric performance in C, C# and Java. Peter Sestoft. 2009. www.itu.dk/~sestoft/papers/numericperformance.pdf.

  • 58

    Hadoop .. , .. , ..

    . .. [email protected]

    -

    Hadoop [1]. -

    , [2]. [2] -, - , ( ). - MapReduce [3] - .

    MapReduce - . .

    -

    . ( ). -. - . TFxIDF, , - [4]. TFxIDF , .
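The TFxIDF weighting named above can be sketched in a few lines; this is a minimal illustration of the scheme [4], not the authors' implementation:

```python
import math

def tfidf(docs):
    """TFxIDF weights for each document; docs is a list of token lists."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for d in docs:
        for w in set(d):
            df[w] = df.get(w, 0) + 1
    out = []
    for d in docs:
        tf = {}                               # term frequency within the document
        for w in d:
            tf[w] = tf.get(w, 0) + 1
        out.append({w: (tf[w] / len(d)) * math.log(n / df[w]) for w in tf})
    return out
```

Terms frequent in one document but rare in the collection get the highest weights, which is why TFxIDF works as a keyword-selection criterion here.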

    . , - - 21 . 36 . MapReduce.

    MapReduce MapReduce Google -

    . MapReduce , - Map Reduce. Map - /. Reduce /, map ( / ).

    Map Reduce :

  • 59

map(String key, String value):
    // key: document name; value: document text
    ResultVector intermediateResult;
    for each word w in value: {
        stem = ExtractStem(w);          // reduce the word to its stem
        intermediateResult.add(stem);
    }
    EmitIntermediate(key, intermediateResult);

reduce(String key, Iterator values):
    // key: document name; values: intermediate stem vectors for the document
    ResultVector result;
    PriorityQueue priorityqueue;
    int i = 0;
    for each v in values: {
        for each w in v: {
            priorityqueue.add(w, countWeight(w));
        }
    }
    // keep the K highest-weighted stems as the document's keywords
    while (i < K) {
        result.add(priorityqueue.poll());
        i = i + 1;
    }
    Emit(key, result);
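The same map/shuffle/reduce flow can be exercised end to end with a tiny in-memory driver; this is a didactic sketch (Hadoop distributes exactly these steps across nodes), and the trivial "stemmer" and the keyword count K = 2 are assumptions:

```python
from collections import Counter
from itertools import groupby

def run_mapreduce(records, mapper, reducer):
    """Minimal single-process MapReduce driver."""
    intermediate = []
    for key, value in records:
        intermediate.extend(mapper(key, value))   # map phase
    intermediate.sort(key=lambda kv: kv[0])       # shuffle/sort phase
    return {k: reducer(k, [v for _, v in group])  # reduce phase
            for k, group in groupby(intermediate, key=lambda kv: kv[0])}

def mapper(doc_name, text):
    # emit (doc, stem) pairs; lowercasing stands in for real stemming
    return [(doc_name, w.lower()) for w in text.split()]

def reducer(doc_name, stems):
    # keep the K most frequent stems as keywords (K assumed = 2)
    return [w for w, _ in Counter(stems).most_common(2)]
```

Running it on two toy documents returns, per document, its most frequent stems, the same keyword-per-document output the pseudocode above produces.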

  • 60

    1. MapReduce.

    , . .1 MapReduce.

    , .

The test cluster consisted of three machines: Intel Core 2 Duo T2310 (1.46 GHz, 2 GB RAM), AMD Turion ML-34 (1.8 GHz, 1 GB RAM), and AMD Turion TL-56 (1.8 GHz, 2 GB RAM), connected by a 50 Mbps network, running Microsoft Windows XP and Hadoop 0.19.2.

    , - 1 21 . 36 . - MapReduce, Hadoop, - , , - 2 . Hadoop - NameNode, JobTracker, DataNode TaskTracker. Hadoop . - 3 . . .

    .

    .

    .

    Table 1. Time to process the test document collection

        single machine:          21 min 36 s
        cluster of two nodes:    10 min 47 s
        cluster of three nodes:   7 min 45 s


    . , MapReduce ( -), -.

    MapReduce

    . - Hadoop, MapReduce. Hadoop . , Hadoop , .

    1. Apache Hadoop project. : http://hadoop.apache.org/ 2. . . , . . . -

    // - . .. , 2009, 4, . 165171.

    3. Jeffrey Dean, Sanjay Ghemawat. MapReduce: simplified data processing on large clusters // Communications of the ACM. 2008. V. 51 P. 107-113.

    4. Daniel Kelleher, Saturnino Luz. Automatic Hypertext Keyphrase Detection // Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK. 2005. P. 1608-1610.


    .. , .. , ..

    [email protected], [email protected], [email protected]

    , -, , , , -. , - , .

    ( ) , - .

    , . - -. , - , . , , . , -, () .

    . , - n . . - .

    - , - . - . , - -. - , - .

    , -. , - . .

    im , ni ,1= i - , - . - n - .


    D = m_1 + m_2 + … + m_n = Σ_{i=1}^{n} m_i

    . -

    . [ ]D,1 . p -. . , ,

    c = D / (p + 1), i.e. each of the p processors receives approximately the same share c of the total volume D.

    . , ( -

    ) - , . [1]
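The distribution of the total load D = Σ m_i among processors can be sketched with a greedy cumulative split; this is a generic illustration under the assumption of contiguous groups, not the paper's exact rule, and all numbers are made up.

```python
def partition(m, p):
    """Split items with work estimates m into p contiguous groups whose
    total weights approximate D / p (greedy cumulative split)."""
    D = sum(m)
    target = D / p
    groups, cur, acc = [], [], 0.0
    for w in m:
        cur.append(w)
        acc += w
        if acc >= target and len(groups) < p - 1:
            groups.append(cur)
            cur, acc = [], 0.0
    groups.append(cur)
    return groups

m = [5, 1, 4, 2, 3, 5, 2, 2]    # per-item work estimates, D = 24
print(partition(m, 3))           # [[5, 1, 4], [2, 3, 5], [2, 2]]
```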

    1. , ..

    / .. , .. , .. , .. . - : , . , 22-27.09.2008, .297-299.


    .. , .. ,

    -, , . () - . - IBM, :

    16 IBM BladeCenter H chassis. 224 IBM Blade HS21 compute nodes. 4 IBM x3650 servers. 2 IBM x3950 servers. An IBM DS3400 storage system, with 1 Gb Ethernet and 4X InfiniBand interconnects. A separate test segment was also assembled: 1 IBM BladeCenter H chassis. 14 IBM Blade HS21 nodes. 1 IBM x3650 server. An IBM DS3400 storage system with 1 Gb Ethernet. The nodes run SLES 10 with IBM Cluster System Management and the IBM General Parallel File System.

    :

    ; ; -

    ; . . . IBM

    CSM (Cluster System Management). CSM - , , - , Linux. , -. CSM . - , - .

    , xCAT (Extreme Cloud Administration Tool). ,


    IBM CSM SLES 11. xCAT - Open Source, - xCAT -.

    . -

    . - , , Ganglia Nagios.

    Ganglia - . . . , - , - , -.

    Nagios , Ganglia, - . Nagios , - , - ( -) . , Ganglia:

    ( , , , , - );

    .

    (Ganglia Nagios) , - , .

    .

    - . :

    ; ; . .

    Torque Maui. - , - RDIG, . - , Open Source.

    Torque - , , , - . -, , .


    GPU -

    .. , .. , .. . .. ,

    ,

    E0~10-100 , [1,2]. [1,2]. - (GPU) [1,2] .

    The work targets the G80 architecture from NVIDIA and the CUDA programming model, which exposes the GPU as a massively parallel coprocessor. Whereas classical vector processors follow the SIMD model (single instruction, multiple data), NVIDIA describes its architecture as SIMT (single instruction, multiple threads): threads are executed in lockstep groups, but each thread has its own register state and may take its own branch path.

    - GPU , -

    Threads on the GPU are organized hierarchically: they execute in groups called warps, warps belong to thread blocks, and blocks are combined into grids. The function launched for execution on the GPU over such a grid is called a kernel [3, p. 47]. The GPU executes many threads concurrently, hiding memory latency by switching between them.

    , - :


    1. , .

    2. 2 ( )

    3. ( 2- ).

    4. , .

    , . - . GPU. - , - .

    -. , , - -, - , . - . GPU - , GPU. - , CUDA SDK.

    , : CPU, CPU G80 G90 NVIDIA. -. - - GPU - CUDA.

    1. .., .., .., .. .

    - . . . 2.

    2. .., .., .., .. . .

    3. NVIDIA CUDA C Programming Best Practices Guide


    .. , .. , .. . ..

    . , - , , 10-15 . -, . [1]. - , , [2]. , - , , .

    , - () .

    () , , - . - , . , - , - .

    ; - , ; - (); ; Security Map-Point Cluster.

    -

    . , , . .

    , - - [3]. ( ) mn=1835 .

    [4] , - ( ). , -, . , .

    . 6 000 300 000 -


    10 . AMD Turion 64x2, 2 , 2 . , .

    - . , - 34 48. 15 -. ( ) .

    , - . . , . .

    - , , - ( ()) [4].

    , (. 1).

    1.

    , , (

    ) , . - , . . : N, N - ;

    - , - .

    , - , , - .

    . - - MPIH-1.


    , - n , , n [6].

    - , - [7].

    :

    , ; . , [5]. .

    . 1. Q, q. , .

    2. q Q. , .

    3. Q', q. Q' q , - q - Q'. , .

    4. q Q, , .

    5. , q Q. , Q. q .

    6. Q' , , q .

    7. q Q'. 8. , Q' . , Q' . , Q', . . 700700 2

    , 71, N = 28.

    , Gigabit Ethernet D-LINK DGS-1016D. Intel(R) Core(TM)2 CPU 1,87 3 .

    4. , 50 .

    3 : 1. ; 2. ; 3. N 1.

    4 : 1. ; 2. ;


    3. . - N 1;

    4. . 1.

    - , .

    Run time, s (on 1, 4, 8 and 12 processors):

        task 1:   53.2   36.5   41.4   34.6
        task 2: 1414.5  374.6  211.0  158.7
        task 3: 3230.2  858.6  476.9  358.6
        task 1:   30.2   27.5   25.4   37.6
        task 2:  123.5   90.6   43.0   54.7
        task 3: 1027.2  224.6  164.6  147.9
        task 4: 1545.0  298.0  184.0  147.0
                  56.5   32.1   37.6   34.3
                   3.6    2.4    1.7    1.4

    , -

    , . - . , 1- - 1- 2- , - . .

    Security Map-Point Cluster Security Map-Point Cluster

    . (. 2).

    2.


    SQL- . SQL- -, . SQL- .

    : TCP/IP ; SQL- - .

    - , Host- - .

    . :

    1. SQL- . 2. ,

    SQL- . 3. SQL- , -

    . 4. SQL- -

    , .

    , , .

    1. " ". 2. " ". 3. .., .., .. -

    // . .. . 2001. 1. . 42-47.

    4. .. : - (-), 2005. .237-272.

    5. 8- . - (HPC-2008). , 17-19, 2008. - : . , 2008. . 216-221.

    6. .. , .. , .. . // /. . .. -. . . 3. : - , 2007. .96-108.

    7. .., .., .., .. - / . .. : - . . . -, 2008, .107-127.


    MPI- ..

    , - 2/3 , , [3]. , - ( ) - , - , . - , MPI [1], . -, - -. [2]:

    - MPI ( MPI-, MPI-, - /);

    - ( MPI-, , );

    - ( -/ / , - ).

    , MPI- . - , - :

    - (TotalView, Distributed Debugging Tool); - (MPI-Spin); - (Distributed Virtual Machine); - (Intel Trace Analyzer and Collec-

    tor, MARMOT). ,

    , - ( , ) -; -, ; , - .

    MARMOT [4] Intel Trace Analyzer and Collector -. -; -


    MARMOT -- , ; -/ ..

    -. MARMOT , . - , - - . ITAC MPI- - , MPI- (, ), ( - MPI-) . , - . , - -, . MPI- , - ITAC, .

    , , MPI-. - MARMOT ITAC. - .1. , MPI-, -, , --, , MPI-, . -, - MPI- -. - - {x, y}, 0


    ; ; ; ; -/ ; ; . , - - -, . , -. , - , rcv irc ( MPI_Recv MPI_Irecv); - - , - - MPI_ANY_SOURCE; - MPI_ANY_TAG. - , , , - . , -. . . , -, -, (, MPI_ANY_SOURCE, 2 -), ( 2 ).
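The matching ambiguity that MPI_ANY_SOURCE and MPI_ANY_TAG introduce can be modeled with a toy envelope matcher; this only illustrates the matching rules discussed above, not the internals of MARMOT or ITAC.

```python
ANY_SOURCE = -1   # models MPI_ANY_SOURCE
ANY_TAG = -1      # models MPI_ANY_TAG

def matches(recv, msg):
    """A receive request (source, tag) matches a pending message envelope."""
    src_ok = recv[0] == ANY_SOURCE or recv[0] == msg[0]
    tag_ok = recv[1] == ANY_TAG or recv[1] == msg[1]
    return src_ok and tag_ok

pending = [(0, 7), (2, 7), (2, 9)]   # (source, tag) envelopes in arrival order
recv = (ANY_SOURCE, 7)
candidates = [m for m in pending if matches(recv, m)]
print(candidates)   # [(0, 7), (2, 7)]: two messages race for the same receive
```

With more than one candidate, the outcome depends on message arrival order, which is exactly the nondeterminism the verification tools must explore.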

    , - MPI-. , - - , ; - .. , - . - , MPI- , - . , - MPI-, - -, .

    , MPI, -, , - . - . , . , - .


    1. .

    -

    , - . - , , - - , MPI- . - , - , . - , - MPI- -. , , .

    1. . ., . .

    . : , 2003. - 233 . 2. . ., . . MPI- //

    . IT-. : , 2008. . 236-241.

    3. . , ., . : Model Checking. .: -, 2002. 416 .

    4. Krammer B., Mueller M., Resch M. MPI Application Development Using the Analysis Tool MARMOT // Lecture Notes in Computer Science. Vol. 3038. P. 464 471. Springer Berlin, 2004.


    ANSYS

    ..

    ()

    : ; - ; - . : . , , , B-.

    - , : , , , , . - , , , , - . , , - , - , . . . - , . , - , - . , , , - .

    - - : CATIA, UNIGRAPHICS, Pro/ENGINEER, I-Deas, : ; - - ; -, ; .

    Pro/SURFACE Pro/ENGINEER. - - . , , , - . , , , - , . Pro/SURFACE , , , , , , - , , , . Pro/SURFACE -


    , NURBS, , , - .

    : : ; - ; ; - , -; - - ; - ; - .

    - : ; , ; ; ; , ; -. , , - .

    - .

    : ( ): 0div =+

    Vt

    ; (1)

    ( ):

    i

    iij

    i

    j

    j

    iijij x

    uxu

    xu

    P +

    +

    += ; (2) :

    ( ) ( ) ( )tPQEWTKTCTC

    t VkV

    pp +++++=+

    000 divdiv gradV . (3)

    (1)(3) . - -:

    1. . 2. . 3. . 4. : ) -

    ( ) ; ) ( ) ().

    ANSYS/FLOTRAN CAE- ANSYS - - -, : , , , - , . ,


    , . -.

    , , . - . - : , , - .

    : , -

    ; , . - , .

    , - , - . , - . , , .

    , - . - , . - , , , -.

    , . , , . , , .

    - :

    In matrix form the discretized system couples the unknowns Vx, Vy, Vz, P, T: each degree of freedom contributes a block row assembled from the convection-diffusion matrices K and the coefficient matrices C, with the load vector F = (F_x, F_y, F_z, F_P, F_T)ᵀ on the right-hand side.

    Vx, Vy, Vz, P, T - : , -. - ( ) . - , - . , F , , - .


    - . , - k . - - . - ANSYS/FLOTRAN , - . - , - . . .

    . . - , , - , , - . - . - , , .

    . - , , . , , - .

    ANSYS FLOTRAN - . - . - . - , , .

    , . .

    , -

    , -, ENKE, ENDS. - , , - .

    - AN-SYS/FLOTRAN ANSYS -.

    . 1. ENKE :


    ENKE = v² / 2 ,

    - ; v - . 2. :

    ENKE= 2 . = 2 ENKE

    , ; texp .

    3. The hemolysis level is estimated by the empirical correlation L = 3.62·10⁻⁵ · τ^2.416 · t_exp^0.735 .

    4. :

    ( )( )[ ] ,111

    =

    V

    tQfe

    Hb LLHf

    V - ; H - ; Q - (/); Time - (); L - - (f) (e) .

    5. NIH:

    ( ) ( )( )[ ] =

    VtQ

    fe LLtQVHHNIH 1111 ,

    H - ; - ; V - ; Q - (/); t - (); L - (f) (e).


    nVidia CUDA

    .. , ,

    -

    . - - . , - - - , .

    The Graphics Processing Unit (GPU) was designed for graphics rendering, but its raw computing power can also be applied to problems traditionally solved on the CPU; this approach is known as General-Purpose Computation on Graphics Processing Units (GPGPU).

    For suitable data-parallel workloads, GPU implementations are reported to run 5-20 times faster than their CPU counterparts, which motivates porting image-processing algorithms to the GPU.

    , - GPGPU -

    -

    . - . - - .

    - , - .

    . ( ), .

    , () - . -

    Σ_{u,v} ( I₁(u, v) − I₂(u + d, v) )²    (1)


    Σ_{u,v} | I₁(u, v) − I₂(u + d, v) |    (2)

    Criterion (1) is the sum of squared differences (SSD) and criterion (2) is the sum of absolute differences (SAD); here I_k(u, v) is the intensity of pixel (u, v) in the k-th image and d is the disparity (shift) being tested.

    GPU. , - CUDA - . .

    1. .

    , -

    . , . , , , - .

    (DSI) , . - IR(x, i) IL(x, i) (x, i) , d . DSI

    DSI_i^L(x, d) = | I_L(x, i) − I_R(x − d, i) | ,    (3)

    where −d_max ≤ d ≤ d_max, 0 ≤ x + d < N, and N is the length of the image scanline in pixels.
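Equation (3) for a single scanline can be sketched in plain Python; the function name `dsi_row` and the sample data are illustrative, and the bound on the I_R index is enforced directly.

```python
def dsi_row(left_row, right_row, d_max):
    """Disparity-space image for one scanline:
    dsi[d + d_max][x] = |I_L(x) - I_R(x - d)|, None where x - d is out of range."""
    n = len(left_row)
    dsi = [[None] * n for _ in range(2 * d_max + 1)]
    for d in range(-d_max, d_max + 1):
        for x in range(n):
            if 0 <= x - d < n:
                dsi[d + d_max][x] = abs(left_row[x] - right_row[x - d])
    return dsi

left = [10, 20, 30, 40]
right = [20, 30, 40, 50]   # left scanline shifted by one pixel: I_L(x) == I_R(x - 1)
dsi = dsi_row(left, right, d_max=1)
print(dsi[2])   # row for d = +1: [None, 0, 0, 0], a zero-cost match along the shift
```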

    GPU.


    - , .. .

    , , , .

    - . - - .

    2. , .

    . , -

    , : , , , - .

    , - . , , .. - , -. , -, , , . , - CPU .

    - .. . , . - . , - - .

    . CUDA

    (Single Instruction Multiple Data, SIMD). SIMD -


    , . CUDA , , - , . . .

    Host Device ( ). GPU host CPU. GPU, CPU .

    . - , - . , , .

    .. -, , - . - , , - .

    - GPU CPU -:

    CPU: , , - , ;

    GPU: CUDA, CUDA, -, .

    -, .

    . - - - .

    . - : , . - - (-) . , .

    . , . - .


    - GPU. , - CUDA - . .

    NVIDIA CUDA .

    , -

    , .3.

    3. .

    450/375 -

    . - 8- 0.008-0.012 , - .

    , NVIDIA -

    - CUDA - . CUDA NVIDIA, - GPU, , - , , - .

    - .


    GRID : 5 .. 1, .. 1,2, .. 1, .. 1, .. 1

    1 , ., 2 , .,

    1. -

    . - GRID .

    ( , , , , ..) - . , - 109 . GRID .

    - . -, "open source" (CPMD, Dalton-2, GAMESS-US, NAMD, ABINIT .), (Gaussian-98,-03, Mopac2002, MolPro). - - -- . 2004 21 - - GRID - "- " - - , - -.

    -: 1) - ( ) GRID ; 2) GRID, , - - .

    2. - -

    , , , - , .

    - - : 1) - ; 2) , .


    , , - ( 10-100 , ), . ( 103 ) , .

    , .. - - , -. Gaussian : - . GAMESS - . - , .

    3. GRID -

    GRID - ( ), , , GAMESS, Gaussian, Dalton2, CPMD , , .

    - :

    1. GAMESS-US (http://www.msg.ameslab.gov/GAMESS/); 2. Dalton-2 (http://www.kjemi.uio.no/software/dalton/dalton.htm); 3. CPMD (http://www.cpmd.org/); 4. NAMD (http://www.ks.uiuc.edu/Research/namd/); 5. Gaussian03 (http://www.gaussian.com/); 6. ,

    , ( ) .

    - - , - .

    3.1 2004-2005

    , , - , , . - , - , - -. Condor X-Com. Perl - . , -


    . 30 ~4200 . 5-6 .

    Condor (http://www.cs.wisc.edu/condor) - ( -) - . Condor - Condor , , - . - Condor. Condor - . - : , , - .

    X-Com ( , http://x-com.parallel.ru) . - X-Com , Condor. X-Com - ( ) . . - . WWW-, WWW-, , - .

    40 - Condor 400 X-Com , , - .

    . - ( , 104, 107 ). - middleware. , , . - ( ) - , , ( , - , , ..). . ( ) ( -) , gLite Unicore -


    .

    3.2 ,

    GRID, CERN gLite (http://glite.web.cern.ch/gLite, LCG-2), - EGEE-RDIG (http://www.egee-rdig.ru). 2008 - (http://skif-grid.botik.ru) Unicore (http://www.unicore.eu).

    - gLite Unicore. - , , - , - . (SMP, , MPI) . GRID , - , - RGSTEST RDIG ( RDIG) -. - ( ). - (, SMP+MPI) ( Dalton-2 CPMD). - . - (SMP, , MPI-1,2) ( PBS ).

    , - , . - - . - , - middleware - . , ( GAMESS, Gaussian, Dalton, CPMD, NAMD) . , - , , , - . GRID , , MPI-2 -, .. . , - , - mpd GRID - .


    , , - .

    3.3 WWW Grid Enabled Chemical Physics (GEP)

    GRID , GRID . GRID -, GRID Web . , web- GRID .

    , - GRID. GRID , (, ), - GRID . , , - GRID . . - , , web - .

    Grid Enabled Chemical Physics (GECP, http://grid.icp.ac.ru) - Web-: 1. - GAMESS

    ab initio; 2. ,

    , (Data Parallel). (-

    ), , ( ) , . web- - .

    4. GRID -

    . : GRID

    ( Web-),

    ( ) ;

    ( middleware) , gLite Unicore, .


    .. , .. . .

    1.

    (y)

    (y*) = min{(y): yD}, (1) D , N- .

    ( ) . - , - (., , [1-3]):

    Dymin [ ]1?11min)( bayy = [ ]NNN bay ?min )( ,...1 Nyy

    (2)

    (2) (1) :

    * =Dymin )(y = [ ]111 ,minbay )(

    ~11 y , (3)

    ),...,()(~ 1 iiii yyy = = [ ]111 ,min +++ iii bay ),,...,( 111 ++ iii yyy Ni


    , Fl , N (N ), (4), , -, Fl ( ).

    Fl (6) ( N) ( -). , .. .

    2. -

    . , - , - . , - , - (.. ) ..

    - - ., , [2, 4]. - , - [3-4]. - .

    - . p>1 , . , yD, (y) (1). - -, - .

    , p>1 - D k=0. - ( ) -:

    1. . k>0 y1, y2,, yk, - yk+1, yk+2,, yk+p, - . , k xk+p+1 k+p+1 .

    2. . Fl - 2.1 2.4.

    2.1 . x1, x2,, xs(j), ( ) aj bj -


    fj(x)Fl :

    jssj bxxxxxa == 1210 ... . 2.2. . Is -

    sii Iixx ),,( 1 , -, , [aj,bj].

    2.3. . - , R(i), - .

    2.4. . stt Itxx ),,( 1 , R(t), ..

    }:)(max{)( sIiiRtR = (7) ( R(fj)= R(t) fj(x)Fl).

    3. . xk+p+1 - ),( 1 tt xx f(x)Fl - R(f)

    ),()( 11

    ttpk xxtsx ++ = , }:)(max{)( ljj FffRfR = . (8)

    ( - ), (3)-(5) 2. xk+p+1 yk+p+1 - . - , - .

    4. . , 3 ),( 1 tt xx R(t), >0, ..

    1tt xx . : ""

    Fl; "" )(

    ~11 y .

    - - (7) (8). - - , , , - . , , -


    , , - [3].

    3.

    - -. ( ) , , . - - - . - - - .

    Fl (6): L = { 1, 2, , l }, (9)

    p>1 . - - L :

    = { 1, 2, , p }, (10) i = { js : js L, 1s li }, 1ip,

    iL j : ij, i,j ij ={} i, 1ip, , i, 1ip.

    - . -, - . - , - .

    LN = { ij : ij L, 1j lN }. (11) ; -

    (10):

    N = { 0, 1, 2, , p }, (12) 0= L \ LN, i = { js : js LN, 1sli }, 1ip,

    iL j : ij, i,j ij ={}. (12) i, 1ip, ,

    i, 1ip. 0 - , . - - - , -


    - .

    - . - -:

    ))*18cos((),...,( 21

    21 i

    N

    iiN yyyyf =

    =, Niyi ,..,1]5.1;0.1[ =

    1.

    .

    1. .. . . .: , 1978. 2. R.G. Strongin , Y.D. Sergeyev Global Optimization with non-convex constraints: Sequen-

    tial and parallel algorithms. Kluwer Academic Publishers, Dordrecht, 2000. 3. .. , .. -

    . .: - , 2007. 4. Y.D. Sergeyev , V.A. Grishagin Parallel asynchronous global search and the nested opti-

    mization scheme. // J. Comput. Anal. Appl. 2001. Vol. 3, 2. P. 123-145. 5. .. . .: -

    ; . , 2007.


    - ..

    . ..

    - , [1-2]. . . - , , . ( ), ( - ) . , .

    ( y * ) =m i n { ( y ) : yD , g j ( y ) 0 , 1 j m } . (1) D N- -

    D= { yR N : 2 1 y i 2 1 , 1 i N } .

    , - . - , ( g m + 1 ) g j ( y ) , 1 j m , L j , 1 j m + 1 ,

    g j ( y 1 ) g j ( y 2 ) L j | | y 1 y 2 | | , 1 j m + 1 , y 1 , y 2D . .

    y ( x ) , , - D [ 0 , 1 ]

    ( y ( x * ) ) = m i n { ( y ( x ) ) : x [ 0 , 1 ] , g j ( y ( x ) ) 0 , 1 j m } .

    - , - (. [2]), ..

    | g j ( y ( x ) ) g j ( y ( x ) ) | K j | x x | 1 / N , x , x [ 0 , 1 ] , 1 j m + 1 ,

    N , K j L j K j 4 L j N . [2]. , -


    , 2 m ( m ).

    , - . , x [ 0 , 1 ] , y ( x )R N 2 N . , N - y ' , y ' ' - x ' , x ' ' [0,1].

    -

    Y L ( x ) = { y 1 ( x ) , , y L ( x ) } y ( x ) (. [1]). y i ( x ) Y L ( x ) D . y ' , y ' ' , , x ' , x ' ' y i ( x ) .

    - ( -) , .

    , N - , . - , - . y i ( x ) , - y ' , y ' ' , [0,1], - x ' , x ' ' .

    , , - -.

    1. .

    . 1 ,

    N = 2 . N - -, ,


    . - , N- , - 2 N .

    Y L ( x ) = { y 1 ( x ) , , y L ( x ) }

    m i n { ( y l ( x ) ) : x [ 0 , 1 ] , g j ( y l ( x ) ) 0 , 1 j m } , 1 l L . (2) ,

    z = g ( y ' ) , y ' = y i ( x ' ) g ( y ) i - - z= g ( y ' ) , y ' = y s ( x ' ' ) s - g ( y ) .

    L (2) [ 0 , 1 ] ( . [ 3 ] ) . . - x k , , ( x k 1 , , x k L ). x k [ 0 , 1 ] , s- , :

    1. y k = y s ( x k ) y s ( x ) . 2.

    y k ( y k ). 3. g 1 ( y k ) , , g ( y k ) , m -

    g j ( y k ) 0 , 1 j < , g ( y k ) > 0 , m .

    y k . -, y k , - , =m+ 1 .

    y s ( x k ) , = ( x k ) , z k= g ( y s ( x k ) ) , x k .

    4. x k l [ 0 , 1 ] , 1 l L , y k , , y kD , L

    x k 1 , , x k L ,

    ( x k 1 ) == ( x k L ) = ( x k ) , g ( y 1 ( x k 1 ) ) == g ( y L ( x k L ) ) = z k .

    y k . , -

    , . - L , : , - , - 1, .

    -

    φ(y) = Σ_{i=1}^{N} ( y_i² − cos(18·y_i) ) ,   −1.5 ≤ y_i ≤ 1.5 ,  1 ≤ i ≤ N ,  N = 6 ,


    which attains its global minimum φ(y*) = −N at the point y* = 0 .
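The test function φ(y) = Σ(y_i² − cos(18·y_i)) can be checked numerically: at y = 0 every term equals −cos(0) = −1, so φ(0) = −N. The grid check below on a two-dimensional slice is only illustrative.

```python
import itertools
import math

def phi(y):
    """Multiextremal test function phi(y) = sum_i (y_i**2 - cos(18*y_i))."""
    return sum(yi * yi - math.cos(18.0 * yi) for yi in y)

N = 6
print(phi([0.0] * N))   # -6.0, i.e. phi(y*) = -N at y* = 0

# coarse grid check on a two-dimensional slice of the search domain
grid = [i * 0.3 - 1.5 for i in range(11)]
best = min(phi(p) for p in itertools.product(grid, repeat=2))
print(best)
```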

    m=10, r =2.0, =0.05 . L=30 , . , - , 173116 , 8535 ( - ). 20.28, - 7.48.

    ( 07-01-00467-)

    ( -4694.2008.9).

    1. ..

    // . . . . . 1991. .31. 8. . 11731185.

    2. Strongin R.G., Sergeyev Ya.D. Global optimization with non-convex constraints. Sequen-tial and parallel algorithms. Kluwer Academic Publishers, Dordrecht, 2000.

    3. .. . - .: - ; . , 2007.


    .. , .. , .. [email protected], [email protected], [email protected]

    -

    . , , - . , - :

    1. [1]. - -. , - .

    2. [2]. - , ( - 3D-).

    3. [3]. , .

    , , . , , , - .

    , [4]. - , .

    , [7]. . , : - . , . -, p . *p , .. , (, ..) .

    . , - , , , ( ). , , , .


    , , , . , , - .

    - . - .

    -

    : p - . p . 0=p , =p . wp = w. : 1) 0)(: 11 =pwp , .. p, - ; 2) )1()(: 22 wpwp = , .. p < 1, ; 3) 1=p , .. .

    ( - ) Z1, Z2, Z. , - Z . p1 p2 , - , .

    ( ) -

    . () [5]. - , - , .

    (. [6]). , - ( , - te ). , - 1 ( ) f 0 ( ) - 1-f. ..., - . , , 0.5, - 10 . - f :

    [ ]

    > . 1' + iii ddd . 1' += ii dd ,

    ( ) S(p , )1,...,2,1( i ( )S( . ii dd =' , i


    .. ,

    . , , , .

    , .. , - .

    - . (, , ..), . .

    , - . - - , - ( ) [1].

    - , .

    , - , - , RAS (Reliability, Availability, Serviceability , , ). . - , -.

    : - -

    ; - ,

    ; - . ,

    99,5 %, . 3 43 31- . , 3 43 , 1 51 , 3 1 14 . , , , [2].


    . , , . :

    1. , t;

    2. (0, ), ;

    3. , .

    i.e. the time to failure follows the exponential law F(t) = 1 − e^(−λt) with λ = const and density f(t) = λ·e^(−λt) (the failure flow is the simplest, Poisson, flow). Under this assumption the probability of a failure in the interval (t, t + Δt) does not depend on t and equals λ·Δt + o(Δt), while the probability of two or more failures in (t, t + Δt) is o(Δt).
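The memoryless property of the exponential law can be verified numerically: the conditional failure probability in (t, t + Δt) is λ·Δt regardless of t. The rate value below is illustrative.

```python
import math

lam = 0.5      # failure rate (illustrative value)
dt = 1e-6      # small interval

S = lambda t: math.exp(-lam * t)   # survival probability S(t) = 1 - F(t)

# conditional probability of failing inside (t, t + dt), given survival up to t;
# for the exponential law it equals lam*dt + o(dt) for every t (memorylessness)
for t in (0.0, 10.0, 100.0):
    p = (S(t) - S(t + dt)) / S(t)
    print(round(p / dt, 4))   # 0.5 at each t
```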

    t, t+t, t, , t .

    , .

    , . - () -, , , - , [3].

    , -:

    0 ; 1 1 ,

    2, 2 ; 2 2 , 1 ; 3 1 ,

    2, 2 1, -, 2 ;

    4 2 , 1 - 2, , 1 .

    (. .1).


    211

    121 211

    11 21

    1 2

    2 1

    1

    1

    2

    2

    1. , .

    - :

    (1.2) t0 , :

    (1.3) -

    :


    (1.4), ,

    :

    ,

    . , .

    1. . . Oracle Magazine, 2003. 2. : , . -

    : - "" , 1991. - 23 : . //( /. . .-. .).

    3. .. . / . 2, 2002.


    ..

    , . ,

    Grid , . .

    [1] - . - .

    OurGrid , . - . , , [2]. Grid , [3].

    . , -. . . - , . , - , .

    , , . . . - , . .

    , : ; ; ; ; , .

    s ; t -

    , - ;

    Tmax ;


    , ?

    ?

    ?

    ?

    1. .

    (p1,p2,) , -

    ; Pay . , Z :

    Z (s, t, Tmax , (p1,p2,), Pay) (1) : gi i- ; ui

    , i- - .


    The job completion time is determined by the slowest of the selected resources:

    T_count = max_i ( t_i · u_i ) .    (2)

    Taking into account the waiting time T_wait, the deadline constraint takes the form

    T_count + T_wait ≤ T_max .    (3)

    The cost of executing the job on the S selected resources is

    Cost = Σ_{i=1}^{S} g_i · t_i · u_i ,    (4)

    and the payment left to the resource owners is

    Pr = Pay − Cost = Pay − Σ_{i=1}^{S} g_i · t_i · u_i .    (5)
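Equations (2)-(5) can be evaluated directly; the concrete prices, unit counts and times below are illustrative, and the reading of u_i and t_i as per-resource load parameters is an assumption.

```python
# g_i: price of resource i; u_i and t_i: its load parameters (interpretation assumed);
# all concrete numbers are illustrative
g = [2.0, 1.0, 3.0]
u = [4, 8, 2]
t = [1.0, 2.0, 0.5]

T_count = max(ti * ui for ti, ui in zip(t, u))            # equation (2)
Cost = sum(gi * ti * ui for gi, ti, ui in zip(g, t, u))   # equation (4)

Pay, T_wait, T_max = 40.0, 1.0, 20.0
assert T_count + T_wait <= T_max                          # constraint (3) holds
Profit = Pay - Cost                                       # equation (5)
print(T_count, Cost, Profit)   # 16.0 27.0 13.0
```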

    1.

    . .

    1. ..

    // (HPC2008). 8- . , 17-19, 2008 : . .2008.. 28-30.

    2. Cirne W., Brasileiro F., Andrade N., Costa L.B., Andrade A., Novaes R., Mowbray M. Labs of the World, Unite!!! http://copin.ufcg.edu.br/twiki-public/pub/OG/OurPublications/LabsoftheWorldUnitev19.pdf

    3. Buyya R., Giddy J.,Abramson D. A case for economy Grid architecture for service-oriented Grid computing// Proceedings of the Heterogeneous Computing Workshop, April 2001 http://gridbus.org/papers/ecogrid.pdf


    .. , ..

    , -

    , . , . -, . , -. , 90% .

    - , , .

    : . : , - , --. , - , ( ), . , , , . FACR. , - . FACR - (). -100 Router, MPI.

    - , - - . , , - , .

    . , - . - -- FACR -, , , . - . - , .


    -

    .. . . . ,

    [email protected]

    MPI (Message Passing Interface). , . , , - . ( ). - , , , , , . . , , - , , . -, - .

    .

    The parallel code GDWENOPAR_3D is organized as a set of modules: initialization of the computation (Init3D), the gas-dynamics integration step (GasDinStep), and the exchange of boundary conditions between subdomains (ExchangeBoundConditions). The implementation is built on MPICH 1.2.5 and communicates through MPI. Boundary values are handled by a dedicated module (BoundCond).

    -. , - , , -.

    Two auxiliary programs support the computation: Decomp3D, which partitions the computational domain among the processes, and VisualRes, which visualizes the results using the DISLIN graphics library.

    - 2%- - .

    DISLIN.


    - , - .

    - , . - ( - ), -, . - - .

    ,

    . -

    .


    .. , ..

    .

    , , , . . [1].

    : , (). - . , . - . - : ( ), , .. - - .

    , , , . , . - . - )( 2N . , , - . , - 10 P-IV 3 . , , , , , , - . - , - .

    - .

    : 1. 2. () . 3. ,

    (). . -

    [2]:


    1. .

    ++=+ , (1)

    () ; () ; () t *= ,

    , t . ; ()

    t *= , , t . -;

    () (). ;

    += 21 , (2)

    2 () - t. ;

    1 () . ; ( , , 2 , 1 , , ,t ) ( ,t )

    : += () .

    12 = . S ,

    F .

    )cos(S

    F = , (3) .

    (1), (3) :

    tF

    *= . . , n . Si , i = 1, 2,, n -

    i . -


    , -, .

    Si : iiiiii ++=++ , (4)

    iii += 21 , (5) ii = + )1( , (6)

    i Si Si 1 . - i ; i ; i ; i ; i2 ; i1 ; i , - 1 ( ) Si . - ii= + )1( , Si Si 1+ . - .

    - , - , .

    . ii = + )1( , -

    . , m - ( 2).

    2. .

    n ( 3). m .


    3. .

    , , , ii= + )1( i, , . , . . , - (4), (5) (6).

    , . Si , m - n.

    , , . :

    = =m

    i i

    i

    tS

    1,

    i - i- , Si - - i- , t - .

    , - - .

    , , :

    1. . 2. . 3. . -

    . -, .

    - . , , - ( , ).


    , -[4]. - - . . , .

    Nbasm = )/( , s , s 0,1 0,15, a b - , , . , , ,

    8Nm = . )( N . , , -

    . . - .

    , - .

    1. . . :

    . - . : , 1988. 312 . 1150 . - ISBN 5-286-00017-7 2. . . - -

    : / . . . - . : , 2007. - 279 . - .: . 265.. - ISBN 978-5-02-033651-3

    3. .., .. - .: -, 2002. - 608 . - 3000 . - ISBN 5-94157-160-7

    4. .. . : - . -, 2002. 128 . ISBN 5-7511-1501-5


    .. - "", ,

    [email protected]

    : -

    . . . - , - .

    , , . , , , - , , . , - , - .

    , , .

    . , . - , - . () - , .

    : - ; - ; - ; - .

    - . . - (, ) , . , [7].

    -, , . , -


    -.

    , - - .

    - , .

    -, , -, , , , , , - [5, 6].

    , - .

    . . , - .

    (), -

    , , - [4]. , - . - .

    , . , .

    , . - (, , - ). -, - , .. . - ( ), .

    m- },...,{ 1 mxx=X . , , - . . , , ( ) z . , , () , -, z .


    )( x|zP . D , - )(nx , )(nz , Nn ,1= . -. { }10,z . z - )1( x|zP = , :

    )0()0()1()1()1()1(

    )1( ==+======

    zPzpzPzpzPzp

    |zPxx

    xx .

    , - , , . .

    - , , - . , -, , , , .

    -

    , -, . - , - . - , .

    , . , - .

    - .

    , , , , .

    -, , - , - .

    - - . , , .

    , - , : - ;


    - ( ) - ; - ; - .

    . - , , - .

    ),,( sz|P wx -

    x z , w s . - , . - , - . , - . MatLab. [1], [2] [3] - , , - . , - .

    - . , - 95%.

    :

    - - .

    - , .

    - . , -

    , . - , .

    1. , .. / .. , ..

    , .. , .. // : , -. 2007. 11. . 20 27.

    2. , .. / .. , .. , .. // : - / . .. 2007. . 65, . 14. . 5 12.


    3. , .. / .. , .. , .. // : , . 2007. 11. . 14 19.

    4. , .. -: . . / . . , .. . : - . . -. 1998. 136 .

    5. , .. . . / .. -, .. . .: , 2001. 328 .

    6. , . : . / . . -, 2006. 1104 .

    7. , . . / . . - .: , 1992 -184.


    (MPP-) ..

    . .. ,

    , () . (). - (), :

    - , - ;

    - .

    , , .

    MIMD , MIMD p , ( ) .

    MIMD , .

    1. - .

    2. , - (- ).

    3. -.

    [1] , . - , . , (k1/C, C- ). , -, . ,


    , - .

    ( ), - . , AX=F, X=X+(F-AX), X=BX+G,

    x_i^(k+1) = Σ_{j=1}^{i−1} b_ij · x_j^(k+1) + Σ_{j=i+1}^{n} b_ij · x_j^(k) + g_i ,   i = 1, …, n .
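The simple-iteration scheme X = BX + G mentioned above can be demonstrated on a small contractive example; the matrix and vector below are illustrative.

```python
def iterate(B, G, x, steps):
    """Simple iteration x^(k+1) = B x^(k) + G."""
    n = len(G)
    for _ in range(steps):
        x = [sum(B[i][j] * x[j] for j in range(n)) + G[i] for i in range(n)]
    return x

B = [[0.0, 0.5],
     [0.25, 0.0]]   # contractive iteration matrix, so the process converges
G = [1.0, 1.0]
x = iterate(B, G, [0.0, 0.0], 60)
print([round(v, 4) for v in x])   # [1.7143, 1.4286], the fixed point x = Bx + G
```

The fixed point is x₁ = 12/7, x₂ = 10/7, which indeed satisfies x = Bx + G.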

    . [2] , ,

    the iteration matrix B = (b_ij) has a band structure: its nonzero elements are confined to a band of diagonals around the main diagonal.


    , E2 {C,D}.

    , E2{D, C} (x,y) (x+Dk,y+Cn), k,nN, .. N1 N2 CD, N1>D, N2>C. , ([N1/D]+1)([N2/C]+1) k,n, k=1,,([N1/C]+1), n=1, ..., ([N2/D]+1), DC , .

    1. v (0). , , 0 mk, ( 0km = fkm), fkm, N, t, t=0, ij, aij, dij .

    2. (xi,yj) i=1.. ([N1/D]+1, j=1.. C ( k,1, k=1, ..., ([N1/D]+1). (1, j) : - , - , - , - 1(x1,yj), 1. (2,j) 1(x2,yj), (1,j) 2(x1,yj). (3,j), (2,j) (1,j) 1(x3,yj), 2(x2,yj) 3(x1,yj), . .

    1(xD,yj) (D,j) D(x1,yj), D-

    1(x2,yj), , 1(xC,y1) . 1 ( ). : 1(xD+1,yj), D(xD+2,yj), , 2(x2D,yj) (. 1 ), 2(xD+1,yj), 1(xD+2,yj), D(xD+3,yj), , 3(x2D,yj) (. 1 ), . . ( D {N1/D} ) - .

    1.

    (D,1)

    (D,C) (1,C)

    )

    1

    1 3

    1

    2

    2 3 4 (1,1)

    1 2 3

    3 2

    4

    4

    4

    (D,1)

    (D,C) (1,C)

    )

    2

    24

    2

    3

    341(1,1)

    234

    4 3

    1

    1

    1

    (D,1)

    (D,C) (1,C)

    )

    3

    3 1

    3

    4

    4 1 2(1,1)

    3 4 1

    1 4

    2

    2

    2

    (D,1)

    (D,C) (1,C)

    )

    4

    42

    4

    1

    123(1,1)

    412

    2 1

    3

    3

    3


    D .

    3. D . t=N+1, i,j=1 -

    . , - - . - .

    . , (t+1)- (3) , Q=(N1-1)(N2-1) , t- , t0 - .

    , - . n0 T1=MQt. v(t+1) , Q=N1N2 Tp=(N1N2)(Mt+4t0) ([ n0/C]+1) , , k k t0=0

    k = )D)]D/(n([tnNMN

    TTp 20

    0211++= )4t(Mt)N(N 021

    CD,

    k = CDk

    k =

    =1 .

    , , , .

    , . - , , , , . .

    1. .. - -

    : . ... . . . -, 2008. 197 . - .

    2. .. // : . - . - [ ]


    - PowerXCell 8i

    ..

    [email protected]

    () . - , . , . , , .

    . , -- . . , -. . , . - , .

    , . , , , .

    -, . , . , - .

    -, (, - ) -, . [1] -- , : , , ...

    , . CELL [2].

    , , -. , .


    PowerXCell 8i. - :

    1. ; 2. ; 3. ; 4. . -

    . , - . , .. . , , . , .

    , .. . . PowerXCell 8i,