WBIC-HPHI Platform End-of-Life

Important

Update 14th June 2021

This service has now reached end of life and has closed.

Should you wish to access any unmigrated data, please contact our support team at support@hpc.cam.ac.uk for assistance in migrating it to an alternative location.

All unmigrated data remaining after Friday 30th July will be deleted.

Note

Update 25th May 2021

In response to ongoing feedback from PIs, we have extended the previously announced deadline for the closure of all access to the platform by a further 2 weeks, to help ensure as many people as possible can complete their migration to CSD3 in time.

Note that we will still shut down the compute cluster portion of the WBIC-HPHI platform on the previously announced date of 31st May.

However, we will retain access to the login nodes until 14th June so that users can still access their data and complete data migrations to CSD3.

Note

Update 21st April 2021

We have pushed back the previously announced deadlines by 1 month to help ensure a smooth migration of users over to CSD3.

Important

The WBIC-HPHI platform, funded by a 2015 MRC infrastructure grant, is closing as planned at the end of May, as the hardware it runs on has reached the end of its serviceable lifetime.

This will mean that the service will close on Monday 31st May. However, the functionality of the service has been absorbed into the University-wide CSD3 platform, albeit without the subsidy that has applied for large data and compute users on the HPHI platform.

Un-migrated data still on the platform will be deleted after Friday 30th July.

Users who wish to continue working with the platform should begin moving over to our central CSD3 cluster (https://www.hpc.cam.ac.uk/high-performance-computing).

Key Deadlines (Updated 25th May)

What will happen after Monday 31st May

  • The WBIC compute cluster nodes will be shut down and it will no longer be possible to submit user jobs to the cluster.

  • Access to the WBIC login nodes and all filesystems (/lustre, /data, /home) will continue, to facilitate data migration off the platform to CSD3 or elsewhere.

What will happen after Monday 14th June

  • The WBIC login nodes will be shut down and the usual access to the existing WBIC platform will cease.

  • All data still on the platform will only be accessible on request (not including PACS data, see PACS Access). Access will be provided in read-only mode in cases where it is necessary to continue migrating data onto CSD3. The intention is that, by 14th June, all users will either have completed migrating their data to CSD3 or have a migration actively in progress.

What will happen after Fri 30th July

  • All remaining unmigrated data on the platform (/wbic, /data and /home filesystems) will be deleted, and the hardware decommissioned.

Migrating to CSD3

Compute Services and Software

It will be necessary for users to apply for new accounts on the CSD3 cluster, which can be done via this form.

CSD3 gives all users an allocation of free usage, which is operated at a Service Level (SL3) outlined here: https://docs.hpc.cam.ac.uk/hpc/user-guide/policies.html#service-levels

CSD3 also offers paid access, which operates under a simple pay-for-use mechanism and gives access to priority Service Levels. University of Cambridge users can find prices for paid access here: https://docs.hpc.cam.ac.uk/hpc/user-guide/policies.html#costs-overview

We are also making every effort to port over all necessary software packages used on the WBIC platform to CSD3, so you can continue working as before.

Logging onto CSD3

As with the WBIC-HPHI system, login is via your University password, but your target node is login.hpc.cam.ac.uk rather than the usual wbic-gate nodes. You should be able to set up graphical interfaces in a similar way as on WBIC-HPHI, using X2Go or VNC. You will not need to set up the University VPN to access CSD3.
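
For example, a minimal login from a terminal looks like the following (assuming, as is usual for University systems, that your username is your CRSid; replace the placeholder accordingly):

$ ssh <CRSid>@login.hpc.cam.ac.uk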

Please see the CSD3 user documentation, and contact support@hpc.cam.ac.uk as before if you encounter problems.

Data Storage

Warning

The WBIC platform contained a number of different storage areas for users, all of which will be retired with the closure of the platform.

All data should be migrated to an alternative storage platform on CSD3, or downloaded to another location, before the platform is shut down.

Below we outline, for each storage area, the alternative services available for purchase on CSD3, so that you can replicate the WBIC environment as closely as possible.

/lustre

The WBIC dedicated Lustre filesystem (/lustre) is replaced by our RDS storage service which provides equivalent large-scale scratch storage.

All CSD3 users get a free 1TB personal scratch space, but all group-shared capacity must be purchased through our Storage self-service gateway. Owners of group directories will be contacted separately to make arrangements.

/home

All WBIC users had 100GB of /home space; on CSD3 this is reduced to 40GB. Users with usage greater than 40GB will need either to reduce their usage below this limit, or to purchase additional RFS or RDS storage and move the excess data there.
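
To check whether your current /home usage would fit within the 40GB CSD3 quota, and to find the largest top-level directories to trim or move, something like the following (standard du and sort usage) works on a WBIC login node:

$ du -sh ~
$ du -h --max-depth=1 ~ | sort -h | tail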

/data

WBIC users also had access to the /data storage location, which benefited from the same resilient storage platform as /home, including hourly snapshots and data replication, designed for datasets requiring durable backups.

A similar service is available on CSD3 through our RFS service, which can now be provided as an NFS-mountable dataset on CSD3 designed for highly resilient group-storage with hourly backups.

RFS is sold via the self-service gateway.

Note

RFS is usually sold as an SMB fileshare for users who wish to mount the storage on their desktops/laptops across the Cambridge University Network. However, the above-mentioned NFS share on CSD3 is available on request. Please contact us at support@hpc.cam.ac.uk to discuss if this is of interest.

/lustre/archive

WBIC had a special archive storage area on the /lustre filesystem, from which data was migrated to a tape-based storage service.

CSD3 offers the RCS storage service as an equivalent for this, providing a location for inactive, archive data for long-term storage.

RCS storage can be purchased via the self-service storage portal (see step 3 of the Actions below).

PACS Access

Users working with data from PACS will be able to continue doing so from the CSD3 login nodes.

To do this you will need an existing CSD3 account, and to migrate the credentials used to authenticate you to the PACS server from your WBIC home directory to CSD3.

This can be performed as follows:

user@wbic-gate-3:~$ dcm_migrate

This should only need to be performed once, and will require your password to be entered. Your access to PACS should then be enabled. If you then wish to run dcmconv.pl to access your scan data on CSD3, you will first need to run module load wbic. You might wish to place this command in your ~/.bashrc file to make sure that you do not have to run it manually at each login. To access PACS data via ImageJ, you will similarly need to run module load ImageJ first.
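
For example, on a CSD3 login node you might run the following (the second command simply appends the module load line to your ~/.bashrc, as suggested above):

$ module load wbic
$ echo "module load wbic" >> ~/.bashrc
$ module load ImageJ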

Actions: What you need to do to migrate to CSD3

1. Apply for a CSD3 user account

Only needed if you don’t have one already; see the account application form mentioned above under Compute Services and Software.

2. Build a list of what data you wish to migrate to CSD3

To do this it would help to consider two cases:

2a. Data that you individually own

E.g. under /home, /data, /lustre/scratch …

Note

You can check your usage of /home and /data against your quota using the following:

$ quota -s

If you wish to know how much data you own under /lustre/scratch please raise a support request with support@hpc.cam.ac.uk.
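
If you already know which directories under /lustre/scratch are yours, a rough self-check is also possible with du (the directory name below is a placeholder; the figure from the support team is authoritative):

$ du -sh /lustre/scratch/<your-directory>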

You should migrate all of this data yourself to CSD3, either into your 40GB CSD3 home directory or into your 1TB free CSD3 scratch space.

If you need more than this, consider purchasing additional RDS or RFS space as needed.

2b. Data that is group-owned

E.g. data under /lustre/group or /lustre/archive, or any group-owned data in other locations.

This data will need to be moved to an RDS (scratch), RCS (archive) or RFS (resilient, backed-up) project on CSD3.

Group storage should be purchased as a group, typically by the PI for your research group. Additionally, given the quantities of data involved, we can perform the data migration for your group so that it is carried out as efficiently as possible.

Important

Please encourage your PI to get in touch with us at support@hpc.cam.ac.uk to discuss how we can assist the setup and migration of group data to CSD3.

We will be attempting to contact all known PIs to assist in this process, but being proactive here will help make this transition as seamless as possible for your group.

3. Purchase any required storage projects on CSD3

Purchases can be made via our storage portal. Projects greater than 100TB in size are eligible for a volume pricing discount.

4. Migrate individual files to CSD3

For relatively modest quantities of data (less than 1TB), you should be able to migrate the data yourself with rsync. An example command would be:

user@wbic-gate-3:~$ rsync --verbose --info=progress2 --archive --no-o --no-g \
 --whole-file --update --inplace --rsh \
 "ssh -T -x -o Compression=no -c aes128-gcm@openssh.com" \
 ~/ login.hpc.cam.ac.uk:~/wbic/

This command will migrate everything under your home (~/) into a new folder inside your CSD3 home directory (~/wbic). If you wish to copy files more selectively you can adjust this command to copy just those directories that you wish to move.
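
For example, to copy only a single directory (here a hypothetical ~/analysis directory) into the same destination, narrow the source path:

user@wbic-gate-3:~$ rsync --verbose --info=progress2 --archive --no-o --no-g \
 --whole-file --update --inplace --rsh \
 "ssh -T -x -o Compression=no -c aes128-gcm@openssh.com" \
 ~/analysis login.hpc.cam.ac.uk:~/wbic/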

When this transfer is complete, re-run it if you have continued to modify files in your home directory during the transfer. This ensures that any recently modified files are also picked up and transferred.

Note

As mentioned above under PACS Access, you will need to run dcm_migrate to retain access to the PACS server.

5. Contact support to arrange migration of Group storage

For larger quantities of data under /lustre, we will perform the migration for your group.

Please get in touch with us at support@hpc.cam.ac.uk to start this process.