The COVID-19 pandemic forced some of the ALMA Observatory's teams to change their working model from on-site or office-based to fully remote. The performance of these groups during the emergency showed that a hybrid working model would be viable in the long term, especially for teams whose activities are concentrated away from the observatory site or the Santiago offices; the science and computing groups were the best candidates for adopting a different working model. Many lessons learned from this experience contributed to establishing a permanent hybrid model. The ALMA Software group, consisting of 18 engineers, made this transition by drawing on the knowledge gained during the pandemic, achieving a smooth and successful outcome: productivity levels and a cohesive team spirit were maintained regardless of where group members were physically located. This paper provides an overview of the key considerations, challenges, and benefits associated with the shift towards a hybrid working model. Factors such as technological infrastructure, communication and collaboration, collaborators' well-being, and performance metrics are analyzed from the managers' and supervisors' point of view. The paper also describes the challenges the group will face in the near future, and the actions taken to mitigate the risks and disadvantages of the new working environment.
The Joint ALMA Observatory (JAO) decided some years ago to become a data-centric operational facility, basing its operational decision-making on evidence and making sustained efforts to adopt data science practices in its daily operations. Key non-profit collaborations allowed ALMA to work with Dataiku, empowering us to design projects that explore high data volumes and prepare solutions enabling informed operational decisions. To increase the capabilities of the data science platform, JAO invested in in-house infrastructure, providing a Hadoop ecosystem that allowed big datasets to be processed in reasonable time. Provisioning such an ecosystem is laborious and expensive in terms of system administration effort, which highlighted the need to explore alternatives. JAO sought collaborations with cloud providers to investigate alternatives, deciding to experiment with Amazon Web Services (AWS). Key elements in this decision were the flexibility provided and a practical, hands-on explorative approach that was close to JAO's vision. The relationship, formalized through a Memorandum of Understanding, enabled the development of a proof of concept (PoC) aiming to replicate the existing system on the cloud. Although the PoC may not sound like an ambitious goal, designing an architecture in which the broad set of technologies offered by AWS works seamlessly with Dataiku was a non-trivial challenge, on top of the limited six weeks available to complete it and the continuous learning of new technologies and concepts. This paper summarizes our results, lessons learned, and key insights gained during this focused and successful rapid prototyping effort.
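As an illustration of the kind of batch workload such a platform runs, the aggregations a Hadoop-style ecosystem performs over large operational datasets follow a map/shuffle/reduce pattern. The sketch below is a minimal, hypothetical Python example; the record fields and values are invented for illustration and are not ALMA data:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs, e.g. observing minutes per antenna."""
    for rec in records:
        yield rec["antenna"], rec["minutes_on_source"]

def reduce_phase(pairs):
    """Shuffle + reduce: sum the values for each key."""
    totals = defaultdict(float)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Invented sample records standing in for a large operational dataset.
records = [
    {"antenna": "DA41", "minutes_on_source": 12.5},
    {"antenna": "DV02", "minutes_on_source": 7.0},
    {"antenna": "DA41", "minutes_on_source": 3.5},
]
print(reduce_phase(map_phase(records)))  # {'DA41': 16.0, 'DV02': 7.0}
```

In a real Hadoop or cloud deployment the map and reduce phases run distributed across many nodes; the logic per record, however, stays this simple.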
KEYWORDS: Antennas, Optical correlators, Observatories, Computing systems, Signal processing, Hardware testing, Current controlled current source, Adaptive optics, Electronics, Control systems
The Atacama Large Millimeter/submillimeter Array (ALMA) has been in its operations phase since 2013. The transition to operations changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of the technical time required for testing newer versions of the ALMA software. Therefore, a process to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment, was started in 2017. Concepts of model in the loop and hardware in the loop were explored. In this paper we review and present the experiences gained and lessons learned during the design and implementation of the new simulation environment.
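The "model in the loop" concept can be illustrated with a minimal, hypothetical sketch: the control software talks to an abstract device interface, so a software model of an antenna can be substituted for the real hardware during testing. All class and method names below are invented for illustration and are not part of the ALMA software:

```python
from abc import ABC, abstractmethod

class AntennaInterface(ABC):
    """Interface shared by the real mount and its software model."""
    @abstractmethod
    def point(self, az_deg, el_deg):
        ...
    @abstractmethod
    def position(self):
        ...

class SimulatedAntenna(AntennaInterface):
    """Software model standing in for the real hardware in the loop."""
    def __init__(self):
        self._az, self._el = 0.0, 90.0  # parked at zenith
    def point(self, az_deg, el_deg):
        # A faithful model would integrate servo dynamics over time;
        # this sketch simply jumps to the commanded position.
        self._az = az_deg % 360.0
        self._el = max(0.0, min(90.0, el_deg))
    def position(self):
        return self._az, self._el

def slew_and_verify(antenna, az, el, tol=0.01):
    """Control-software logic, unaware of whether the device is real."""
    antenna.point(az, el)
    cur_az, cur_el = antenna.position()
    return abs(cur_az - az % 360.0) < tol and abs(cur_el - el) < tol

print(slew_and_verify(SimulatedAntenna(), 123.4, 45.0))  # True
```

Because the control code depends only on the interface, the same test suite can run against the simulated model during regular software testing and against real hardware when technical time is available.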
The Multi-Object Optical and Near-infrared Spectrograph (MOONS) will cover the Very Large Telescope's (VLT) field of view with 1000 fibres. The fibres will be mounted on fibre positioning units (FPUs) implemented as two-DOF robot arms to ensure homogeneous coverage of the 500 square arcmin field of view. A metrology system has been designed to determine the positions of the 1000 fibres accurately and quickly. This paper presents the hardware and software design and the performance of the metrology system. The system is based on the analysis of images taken by a circular array of 12 cameras located close to the VLT's derotator ring around the Nasmyth focus, and includes 24 individually adjustable lamps. The fibre positions are measured through dedicated metrology targets mounted on top of the FPUs, which are imaged simultaneously with fiducial markers attached to the FPU support plate. A flexible pipeline based on VLT standards is used to process the images. The position accuracy was determined to be ~5 μm in the central region of the images; including the outer regions, the overall positioning accuracy is ~25 μm. The MOONS metrology system is fully set up as a working prototype, and the results in parts of the images are already excellent. By using upcoming hardware and improving the calibration, we expect to fulfil the accuracy requirement over the complete field of view for all metrology cameras.
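The measurement principle described above can be illustrated with a minimal sketch (not the actual MOONS pipeline): an intensity-weighted centroid locates a synthetic target in a camera frame, and an assumed plate scale, of the kind derived from fiducial markers at a known separation, converts pixels to microns. All numbers below are illustrative:

```python
import numpy as np

def centroid(image, x0, y0, half=5):
    """Intensity-weighted centroid in a (2*half+1)^2 pixel window."""
    win = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    total = win.sum()
    return (xs * win).sum() / total, (ys * win).sum() / total

# Synthetic 64x64 frame with one Gaussian "metrology target" at
# (30.3, 22.7) pixels; a real frame would hold many targets plus
# fiducial markers imaged at the same time.
yy, xx = np.mgrid[0:64, 0:64]
image = np.exp(-((xx - 30.3) ** 2 + (yy - 22.7) ** 2) / (2 * 1.5 ** 2))

cx, cy = centroid(image, 30, 23)

# Assumed calibration: fiducials 10 mm apart imaged 500 px apart
# gives a plate scale of 20 um per pixel (invented numbers).
um_per_px = 10_000.0 / 500.0
pos_um = (cx * um_per_px, cy * um_per_px)
print(pos_um)
```

Sub-pixel centroiding is what makes micron-level accuracy possible: with a ~20 μm/px scale, a ~5 μm accuracy corresponds to locating each target to roughly a quarter of a pixel.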
As we all know too well, building a collaborative community around a software infrastructure is not easy. Besides recruiting enthusiasts to work as part of it, mostly for free, to succeed you also need to overcome a number of technical, sociological and, to our surprise, even political hurdles. The ALMA Common Software (ACS) was developed at ESO and partner institutions over the course of more than 10 years. While it was mainly intended for the ALMA Observatory, it was conceived early on as a generic distributed control framework. ACS has been periodically released to the public under an LGPL license, which encouraged around a dozen non-ALMA institutions to use ACS for both industrial and educational applications. In recent years, the Cherenkov Telescope Array and the LLAMA Observatory have also decided to adopt the framework for their own control systems. The aim of the “ACS Community” is to support independent initiatives in making use of the ACS framework and to further contribute to its development. The Community provides access to a growing network of volunteers eager to develop ACS in areas that are not necessarily in ALMA's interests, or that were not within the original system scope. Current examples are support for additional OS platforms, extension of the supported hardware interfaces, a public code repository, and a build farm. The ACS Community makes use of existing collaborations with Chilean and Brazilian universities, reaching out to promising engineers in the making. At the same time, projects actively using ACS have committed valuable resources to assist the Community's work. Well-established training programs like the ACS Workshops are also being continued through the Community. This paper gives a detailed account of the ongoing (second) journey towards establishing a world-wide open source collaboration around ACS.
The ACS Community is growing into a horizontal partnership across a decentralized and diversified group of actors, and we are excited about its technical and human potential.
The Atacama Large Millimeter/submillimeter Array (ALMA) entered its operations phase in 2013. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of technical time. Therefore, the design and implementation of a new simulation environment, comparable to, or at least representative of, the production environment, was planned. Concepts of model in the loop and hardware in the loop were explored. In this paper we review the experiences gained and lessons learnt during the design and implementation of the new simulation environment.