ARC-CE v6 at ScotGrid Glasgow
Transcript of a slide deck presented by Dr. Emanuele Simili on 29/08/2019 @ GridPP43

Page 1

ARC-CE v6 at ScotGrid Glasgow

Dr. Emanuele Simili 29/08/2019 @ GridPP43

Page 2

Outline

ScotGrid Glasgow: Gareth, Gordon, Sam, me (Emanuele)

ARC-CE: The Advanced Resource Connector (ARC) Computing Element (CE) middleware, developed by the NorduGrid Collaboration, is an open-source software solution enabling e-Science computing infrastructures, with emphasis on processing of large data volumes.

• New cluster prototype

• Installing ARC 6

• Authentication with ARGUS

• Adding HTCondor

• Testing

• Documentation & Code

Page 3

Production Cluster Map

Page 4

New Cluster Map (compute)

Page 5

What does it look like in reality?

The core of our new cluster is built in two test-racks (soon to be dismantled and moved to the DC).

Physical machines (compute & services):

- Server DELL 440 (3x), Proxmox (VMs)

- Server DELL 640 (1x), Provisioning

- HP ProLiant DL60 Gen9 (7x), WorkNode

- Server DELL 410 (1x), Squid

- Extreme switch 1 Gb (1x), IPMI net

- Lenovo switch 10 Gb (2x), internal net

- Lenovo switch 40 Gb (1x), uplink

Virtual machines (services):

- DNS (3x), DHCP (1x), NAT (2x) - ARGUS (1x)

- HTCondor Manager (1x) - ARC-CE (2x)

Ceph testing (storage):

- Server DELL 740 (8x) CEPH storage,

- Server DELL 440 (3x) CEPH servers

- Server DELL 440+ (2x) CEPH cache

Page 6

ProxMox Hypervisor

With the new cluster, the desire was to move key components to a virtualised environment to improve resiliency. We investigated OpenStack, oVirt and others, but they were complicated to operate at our desired level of resiliency.

Settled on ProxMox:

• Debian-based appliance.
• Built from 3x R440 with SSDs for VM storage.
• Configured using Ceph for distributed fault tolerance.
• Can set High Availability for automatic VM migration on fault.
• Allows VMs and containers to be centrally managed.
• Currently running key services for the new production environment.

Page 7

Why ARC 6 and not HTCondor-CE?

• We investigated the option of deploying an HTCondor-CE instead of an ARC-CE, and started some initial experiments using details from Steve Jones (see GridPP & HepSysMan talks) along with the OSG documentation …

• We couldn’t find an exact match to replicate our production set-up (we wanted multiple CEs with a separate HTCondor Collector running on a different host), and in the limited time we had, the configuration of security between the CE and the multiple Collectors proved tricky ...

So, to meet the CentOS7 upgrade timetable, we decided to go for ARC-CEs, as we were more familiar with the configuration and had confidence in our understanding of the technology. Plus, NorduGrid released the new ARC-CE v6 just at the right time ...

Page 8

ARC-CE Installation

We first installed the newly released ARC6, following the instructions on the NorduGrid website:

http://www.nordugrid.org/arc/arc6/admins/arc6_install_guide.html
http://www.nordugrid.org/arc/arc6/admins/try_arc6.html

Pre-requisites: The ARC server needs a valid Host Certificate and an external IP

The installation steps are as follows:

- install the NorduGrid GPG key and add the NorduGrid repository:

rpm --import http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-6

wget https://download.nordugrid.org/packages/nordugrid-release/releases/6/centos/el7/x86_64/nordugrid-release-6-1.el7.noarch.rpm

yum install nordugrid-release-6-1.el7.noarch.rpm

- install the ARC-6 package nordugrid-arc-arex with yum:

yum -y install nordugrid-arc-arex

- finally configure and start the ARC service(s):

vi /etc/arc.conf

arcctl service enable --as-configured --now

The default configuration allows running fork jobs on the ARC server, and with arcctl test-ca we can set up a dummy CA and test job submission! (A minimal arc.conf sketch is shown below.)
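For reference, a minimal /etc/arc.conf of the kind used at this stage might look roughly like the sketch below. This is an illustration assembled from the try_arc6 guide rather than the exact file used at Glasgow; block and option names should be checked against the ARC6 reference documentation, and the hostname and mapped account are placeholders.

[common]
hostname = ce01.example.org

[mapping]
# map every authenticated user to a single local account (test set-up only)
map_to_user = nobody:nobody

[lrms]
# run jobs directly on the CE host while testing
lrms = fork

[arex]

[arex/ws]

[arex/ws/jobs]

[infosys]

[infosys/glue2]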

Page 9

Proper Configuration

Configuring the full batch system needs a few more configuration bits (/etc/arc.conf):

• Integration with HTCondor
• Integration with ARGUS
• lcmaps plug-in

For authentication, we need to configure the lcmaps plug-in (/etc/lcmaps/lcmaps.db). See next slide …

Page 10

ARC-CE integration with ARGUS

ARC-CE can do the authentication chain either with a static gridmap or using external plug-ins for user mapping (defined in arc.conf). In particular, the lcmaps plug-in is used:

[mapping]
map_with_plugin = all-vos 30 /usr/libexec/arc/arc-lcmaps %D %P liblcmaps.so /usr/lib64 /etc/lcmaps/lcmaps.db arc

The lcmaps plug-in points to the ARGUS server (/etc/lcmaps/lcmaps.db):

path = /usr/lib64/lcmaps

verify_proxy = "lcmaps_verify_proxy.mod"
               "-certdir /etc/grid-security/certificates"
               "--discard_private_key_absence"
               "--allow-limited-proxy"

pepc = "lcmaps_c_pep.mod"
       "--pep-daemon-endpoint-url https://argus:8154/authz"
       "--resourceid http://gla.scotgrid.ac.uk/ce01"
       "--actionid http://glite.org/xacml/action/execute"
       "--capath /etc/grid-security/certificates/"
       "--certificate /etc/grid-security/hostcert.pem"
       "--key /etc/grid-security/hostkey.pem"

# Policies:
arc:
verify_proxy -> pepc

Trust-Chain: the ARC-CE needs IGTF CA certificates deployed to authenticate users and other services, such as storage elements. To deploy the IGTF CA certificates to the ARC-CE host, run:

arcctl deploy igtf-ca classic

… then we can do job submission with real certificates!

Page 11

UMD4 & Yum Priority

About installing lcmaps from the UMD4 repository …

When UMD4 is added from the rpm, by default it sets itself with priority=1 (the highest) and therefore it will take over any other repository ... even if the UMD4 packages are older versions. This is the case for ARC6 (new in NorduGrid) against ARC 5.4 (old in UMD4).

The priority of the NorduGrid repo must be adjusted before installing ARC6! (e.g., set it equal to the priority of UMD4, so that the newest package is selected)

cd /etc/yum.repos.d
vi nordugrid*.repo        # set: priority=1
yum clean all

Check which repository will be used before installing:

yum info nordugrid-arc-arex       # hopefully: nordugrid-*

Alternatively, the UMD4 repo can be temporarily disabled or removed ...
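For illustration, the relevant stanza in the NorduGrid repo file then looks something like the sketch below; the baseurl shown is illustrative (keep whatever nordugrid-release installed), and the only point here is the priority line:

[nordugrid]
name=NorduGrid repository (base)
# baseurl below is illustrative — use the one shipped by nordugrid-release
baseurl=http://download.nordugrid.org/repos/6/centos/el7/$basearch/base
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid-6
priority=1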

Page 12

ARC-CE adding HTCondor

If you have an HTCondor batch system, you just need to add this block to /etc/arc.conf (… and the HTCondor Scheduler must be installed on the ARC-CE server):

Note: the HTCondor batch system doesn’t ‘know’ about the ARC-CE (no reference in the condor config). So ... configure the HTCondor cluster first, then install ARC-CE on the Scheduler node!

[lrms]
lrms = condor

Our prototype batch system is composed of 7 physical machines as WorkNodes, and 2 virtual machines: the HTCondor Manager and the Scheduler (which is also the ARC-CE).
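Before pointing ARC at the batch system, it is worth checking that the Scheduler node can actually see the pool. A quick sanity check with standard HTCondor commands (nothing ARC-specific, and not prescribed by the slides; test.sub is a placeholder submit file) might be:

condor_status            # worker-node slots are visible from the Scheduler
condor_q                 # the local queue responds
condor_submit test.sub   # optionally, submit a plain vanilla-universe test job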

Page 13

ARC-CE integration test

The full chain (ARC-CE + ARGUS + HTCondor) has been extensively tested by submitting jobs with our ScotGrid certificate from a remote ARC client. Example:

- generate an arc-proxy:

arcproxy -S vo.scotgrid.ac.uk

- check info about the ARC-CE and then submit the job:

arcinfo -c ce01.gla.scotgrid.ac.uk

arcsub -c ce01.gla.scotgrid.ac.uk sieve.rsl

If the submit is successful, it returns a string with the Job-ID (e.g., gsiftp://ce01.gla.scotgrid.ac.uk:2811/jobs/blabla). Use this string to check the status of the job (arcstat) and retrieve the output when the job is finished (arcget):

arcstat gsiftp://ce01.gla.scotgrid.ac.uk:2811/jobs/blabla

arcget gsiftp://ce01.gla.scotgrid.ac.uk:2811/jobs/blabla

The command arcget clears the job and copies the output to a local folder, named after the job-id:

ls blabla/

cat blabla/stdout

For this test job (Eratosthenes' sieve), the output should be a list of prime numbers from 1 to 1000. (A sketch of such an xRSL job description is given below.)
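The content of sieve.rsl is not shown in the slides; for illustration, an xRSL job description of that kind could look roughly like this (the executable name, argument and limits are hypothetical):

&(executable="sieve.sh")
 (arguments="1000")
 (jobname="sieve-test")
 (stdout="stdout")
 (stderr="stderr")
 (cputime="10")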

For testing the Squid integration, it is sufficient to run a job that does an ls on the CVMFS folders.

Page 14

ARC-CE adding accounting

Turning on accounting in ARC6 is simple, it only takes 2 steps:

1) Register in the GOC,
2) Add this bit to /etc/arc.conf:

[arex/jura]
loglevel = DEBUG

[arex/jura/archiving]

[arex/jura/apel:egi]
targeturl = https://mq.cro-ngi.hr:6162
topic = /queue/global.accounting.cpu.central
gocdb_name = UKI-SCOTGRID-GLASGOW
benchmark_type = HEPSPEC
benchmark_value = 8.74
use_ssl = yes

Page 15

ARC-CE required BDII information

[infosys]
loglevel = INFO

[infosys/ldap]
bdii_debug_level = INFO

[infosys/nordugrid]

[infosys/glue2]
admindomain_name = UKI-SCOTGRID-GLASGOW

[infosys/glue2/ldap]

[infosys/cluster]
advertisedvo = atlas
advertisedvo = vo.scotgrid.ac.uk
advertisedvo = ops
advertisedvo = dteam
nodememory = 6000
defaultmemory = 2048
nodeaccess = outbound
alias = ScotGrid Glasgow
cluster_location = UK-G128QQ
cluster_owner = University of Glasgow
#homogeneity = False

During our tests, we have found a few bugs in ARC6:

We wanted to run ops-test. To do this we had to enable the BDII & register in the GOC …

Problem: the update of the BDII failed if the provided information was not complete, and it failed silently: no error message was given! (In particular, job submission failed if the user specified a minimum memory requirement but the site did not publish one!)

We think the settings above are the minimal ones it can work with ...

We also discovered a race condition: systemd starts the update and then terminates before the update is finished, causing the update to fail!

This was reported by Gareth, and NorduGrid fixed it in ARC 6.1!
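A quick way to verify what the information system is actually publishing (not shown in the slides) is a plain ldapsearch against the CE's LDAP endpoint; to the best of our knowledge ARC publishes on port 2135 with the base DNs below, but both should be checked against the local configuration:

ldapsearch -x -H ldap://ce01.gla.scotgrid.ac.uk:2135 -b 'o=glue'                     # GLUE2 tree
ldapsearch -x -H ldap://ce01.gla.scotgrid.ac.uk:2135 -b 'Mds-Vo-Name=local,o=grid'   # NorduGrid schema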

Page 16

Add a VO

New VOs are added to the cluster via an Ansible role, which takes care of the following:

# Create pool accounts and home directories on ARC-CE and worker nodes
- include_tasks: pool-accounts.yml

# Create vomsdir configuration everywhere: create VOMSDIR & sub-folders, generate LSC files
- include: vomsdir.yml

# Create vomses configuration on worker nodes: VOMSES folder and content
- include: vomses.yml

# Create mapfiles on ARGUS
- include: mapfiles.yml

# Perform additional configuration on ARGUS: touch files for user mappings & add VO entry to the Argus policy
- include: argus-config.yml

# Perform additional configuration on ARC-CE: add advertisedvo entry to arc.conf
- include: arc-ce-config.yml

Beta testing: the batch system is currently running pilot jobs from ATLAS.

Results: jobs are running and everything seems to work fine!

So far, we have added (and made Ansible 'roles' for) the following VOs:

- vo-atlas

- vo-dteam

- vo-gluex

- vo-ops

- vo-scotgrid
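As a rough illustration (not taken from the slides), applying one of these roles from a playbook could look something like the sketch below; the playbook name and host groups are hypothetical, only the role names above come from the talk:

# add-vo.yml (sketch)
- hosts: arc_ce:argus:workernodes
  become: yes
  roles:
    - vo-dteam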

Page 17

ARC-CE has a new arcctl

New in ARC6 is arcctl, which gives an easy way to configure services, access accounting and get info …

(some examples of arcctl commands are sketched below)
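The examples shown in the talk were screenshots; the sketch below lists the kind of commands meant, based on our reading of the arcctl built-in help for ARC 6.x (sub-commands other than the two already used above may differ between minor releases):

arcctl service list                            # show ARC services and whether they are enabled/running
arcctl service enable --as-configured --now    # as used during installation
arcctl deploy igtf-ca classic                  # as used to deploy the trust anchors
arcctl test-ca init                            # create a dummy CA for local testing
arcctl config dump                             # print the running configuration with defaults expanded
arcctl job list                                # list jobs known to A-REX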

Page 18

Code is Growing

We have collected all configuration/installation procedures in a clean Ansible environment, safely stored in our GitLab repository.

As the code has been growing and becoming more consistent, we thought it was about time to share it.

see Gordon's talk!

What we have built is a collection of fully automated procedures to build a Tier2 site from bare metal.

But, whatever automated tools are out there, we all know that site admins will still prefer to go the hard way ...

Page 19

Documentation is Growing too

Cluster information and administrative procedures are safely organized in our internal Wiki, which is linked to the relevant Ansible roles in GitLab.

As both documentation and code grow, things start getting hard to manage and pages keep getting out of sync ... hopefully things will converge.

However, from my non-expert standpoint, I have learned to make documentation part of the procedures: Learn → Do → Write (… and repeat the cycle a few times, as we normally do).

These pages are currently available only to us (Glasgow), but we are ready to share our documentation as we are sharing our Ansible roles.

http://dokuwiki.beowulf.cluster/

Page 20

END

Dr. Emanuele Simili 29/08/2019 @ GridPP43

Page 21

Physical Machines

Page 22

Virtual Machines

Page 23

HTCondor Cluster

The prototype batch system is composed of 7 physical machines as CentOS7 WorkNodes & 2 virtual machines: the HTCondor Manager and the Scheduler (which is also the CE).

Installation steps:

• Install HTCondor with yum from the University of Wisconsin–Madison repo:
  https://research.cs.wisc.edu/htcondor/yum/repo.d/htcondor-stable-rhel7.repo

• Set the important configuration files (/etc/condor/config.d):
  01-daemons.config    daemon settings (see the sketch below)
  network.config       internal IP address
  security.config      local settings ...
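The daemon table from the slide is not reproduced here; the usual split of daemons across the three kinds of node, expressed with standard HTCondor configuration macros, is roughly as follows (a sketch, not the exact Glasgow files; the manager hostname is hypothetical):

# central manager (HTCondor Manager VM)
CONDOR_HOST = condor-manager.beowulf.cluster
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR

# scheduler / ARC-CE node
DAEMON_LIST = MASTER, SCHEDD

# worker nodes
DAEMON_LIST = MASTER, STARTD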

Note: the HTCondor batch system does not need to know about the ARC-CE (no reference in the config). So ... install HTCondor first, then the ARC-CE on the Scheduler node.

We have Ansible roles for the complete set-up of WorkNodes and Services (they take care of repositories, package installation and configuration).

Page 24

ARGUS

Note: the link between ARC-CE and ARGUS is made on the ARC server via LCMAPS. So ... no special steps are needed when installing ARGUS.

The ARGUS server is installed on a CentOS7 Virtual Machine (svr029).

Installation of the ARGUS meta-package (via Ansible):
- Repositories are added (EPEL, EGI, UMD4, Globus, ARGUS),
- Default TCP ports are enabled for ARGUS (8150, 8152, 8154) and LDAP (2170),
- Globus and gLite packages are installed (with default config),
- The BDII resource package is installed and configured (custom LDAP),
- The ARGUS package is installed and its components/services are configured,
- ARGUS services are started in the correct sequence: PAP, PDP, PEPD.

Ingredients to make it work (what must be present on the ARGUS server):
- Map-files must be present on ARGUS: grid-mapfile, groupmapfile, voms-grid-mapfile;
- Empty ‘pool-mapping’ files on ARGUS (/etc/grid-security/gridmapdir/*) must match the pool accounts on the HTCondor WorkNodes;
- The VOMSDIR directory on ARGUS must contain LSC (= List of Certificates) files for each supported VO (/etc/grid-security/vomsdir/*/*.lsc);
- Policy files on ARGUS must be updated with the supported VOs: /etc/policies/svr029.policy;
- An up-to-date host certificate and an external IP must exist.
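For illustration, a per-VO entry in the ARGUS simplified policy language might look roughly like the sketch below; this is not copied from svr029.policy, and the resource and action IDs are simply the ones used in the lcmaps pepc configuration earlier:

resource "http://gla.scotgrid.ac.uk/ce01" {
    action "http://glite.org/xacml/action/execute" {
        rule permit { vo = "atlas" }
        rule permit { vo = "vo.scotgrid.ac.uk" }
    }
}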

Page 25

Squid

As the WorkNodes use CVMFS, we built a Squid server to act as a cache for the cluster: the new Squid is installed on a physical DELL 410 machine placed in the same test-racks.

Hardware specs as follows:

Hardware: DELL R410

CPU: Xeon 4core @ 2.40 GHz

RAM: 16 GB

HD(s): 3 × 250 GB

Name: biscotti

IPv4 (int): 10.x.x.xx

IPv4 (ext): not yet

The Squid server was also configured remotely with Ansible, steps are as follows:

The root logical volume / is extended proportionally to the number of WORKERS and CACHE_MEM,

The CERN Frontier yum repository is added,

Frontier Squid is installed with yum,

The main configuration file (/etc/squid/customize.sh) is generated with the given parameters,

The Squid service is started.

Network ports: Frontier squid communicates on ports 3128 (TCP) and 3401 (UDP).

Note: the Ansible role we use to install CVMFS also sets the URL of this Squid on all WorkNodes:

infrastructure.yml

squid_server_host: biscotti.beowulf.cluster

squid_server: "{{ squid_server_host }}:3128"
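On the worker-node side, the end result of that role is a CVMFS client configuration pointing at the Squid. The exact file written is not shown in the slides; a typical /etc/cvmfs/default.local of that kind would contain something like the following (values illustrative):

CVMFS_HTTP_PROXY="http://biscotti.beowulf.cluster:3128"
CVMFS_QUOTA_LIMIT=20000   # local cache size in MB (illustrative)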

Page 26

ARC-CE (notes)

Slides on initial install

1) Simple submission with built in CA

2) Add prod CA with fixed user mappings

3) Add central auth with new ARGUS

4) Add batch farm

5) Add required RTE

6) Full integration testing

New tooling, should highlight arcctl

Slides on encountered issues

1) Nagios Probes

2) BDII info, silent failures

3) BDII not updating with systemd (race condition; reported & fixed in 6.1)

4) Empty RTE causing failures (reported & fixed in 6.1)