Satellite Imagery Compression Frameworks

Subject: Tech & Engineering
Pages: 30
Words: 8533
Reading time: 39 min
Study level: Master

Abstract

At present, satellite imagery is one of the most efficient methods to retrieve an accurate visual representation of the landscape. From these considerations, satellite imagery has found immense popularity in cartography, urban planning, agriculture, emergency response, and the studies of climate change. However, comprehensive satellite imagery of the highest quality is also associated with the requirement of extensive data storage capacity and issues of upload speed. One of the methods to mitigate these limitations is by utilizing contemporary techniques of image compression. For instance, raw satellite imagery is generally uploaded in the format of NITF (National Imagery Transmission Format) to increase the overall communication speed. Therefore, it is essential to continually progress the technological advancement in this area to resolve the problem of data storage and communication. The current paper discusses the efficiency of various autoencoders and image compression frameworks in regard to satellite imagery.

Introduction

Motivation

The satellite industry is of great importance to various sectors, including the military, meteorology, safety, climate and environmental monitoring, and landscape mapping. Trends and advancements in the industries using satellites have proliferated in the recent past, paving the way for new inventions [9,73]. The use of satellites to capture ground images for landscaping, meteorological, or intelligence purposes has grown significantly. Competition in the industry has also spurred unprecedented research in image-processing technologies to better retrieve information from images. Satellites mainly capture electromagnetic radiation reflected by the earth's surface [25, 38]. The radiation is emitted by the sun and reflected by the earth; hence, the sensors are passive and do not require their own energy source to illuminate the scene [11, 26].

At present, satellite imagery is one of the most efficient methods to retrieve an accurate visual representation of the landscape. From these considerations, satellite imagery has found immense popularity in cartography, urban planning, agriculture, emergency response, and the studies of climate change [92]. However, comprehensive satellite imagery of the highest quality is also associated with the requirement of extensive data storage capacity and issues of upload speed [94]. One of the methods to mitigate these limitations is by utilizing contemporary techniques of image compression [84]. For instance, raw satellite imagery is generally uploaded in the format of NITF (National Imagery Transmission Format) to increase the overall communication speed [64,71,74]. Therefore, it is essential to continually progress the technological advancement in this area to resolve the problem of data storage and communication. The current paper discusses the efficiency of various autoencoders and image compression frameworks in regard to satellite imagery.

Autoencoders consist of three main parts: an encoder, a bottleneck, and a decoder. The encoder compresses the input data, usually a satellite image, into an output that is several times smaller than the input [34]. The bottleneck is the most essential part of an autoencoder, as it houses the compressed representation of the input [56]. Its contents represent whatever is known about the input data. The decoder unpacks the compressed data and converts it back to its original format. The output is essential in that it is compared to the ground truth for accuracy evaluation [46,52,56]. Hence, the encoder, bottleneck, and decoder must all be effective to preserve the information portrayed in the images.
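As a sketch of this three-part structure, the following toy autoencoder (untrained, with made-up layer sizes, not taken from any cited framework) shows how the bottleneck holds a representation many times smaller than the input while the decoder restores the original shape:

```python
import numpy as np

# A minimal, untrained autoencoder sketch (illustrative only): the layer
# sizes and the tanh activation are assumptions, not from any cited work.
rng = np.random.default_rng(0)

n_input, n_bottleneck = 1024, 64          # e.g. a flattened 32x32 image patch

# Encoder and decoder are single dense layers for brevity.
W_enc = rng.normal(0, 0.01, (n_bottleneck, n_input))
W_dec = rng.normal(0, 0.01, (n_input, n_bottleneck))

def encode(x):
    """Compress the input into the bottleneck representation."""
    return np.tanh(W_enc @ x)

def decode(z):
    """Reconstruct (decompress) the data from the bottleneck code."""
    return W_dec @ z

x = rng.random(n_input)                   # stand-in for a flattened image patch
z = encode(x)                             # bottleneck: 16x fewer values
x_hat = decode(z)                         # same shape as the input

print(z.shape, x_hat.shape)               # (64,) (1024,)
```

In a real system the two weight matrices are trained so that `x_hat` approximates `x`, and the reconstruction is compared against ground truth exactly as described above.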

There exist different technologies, tools, and approaches used in studying satellite images. Images captured at different times reveal clear changes in natural and artificial features such as soil erosion, carbon-emission levels, infrastructural development, or waste build-up. Irrespective of the study approach employed, some static factors must be considered in studying satellite images [38]. They include scale, patterns, shapes and textures, and colors. The results obtained are always compared to prior knowledge to better understand the imagery [25]. For instance, identifying water bodies such as lakes, rivers, or oceans depends primarily on their color and shape. True-color satellite images use visible light wavelengths: red, green, and blue. This implies that the images are similar to what a human eye would see from space. In true-color images, features appear in a detailed manner and are easy to decipher and understand. However, images that use false colors may include unanticipated colors which might be difficult to interpret from a natural point of view.

The study and interpretation of satellite images have grown over the years with the development of new technologies to model, interpret, and understand the details they present. Modern sensors capture images at high resolution, translating into large file sizes which might be challenging to store or transmit [38]. As a result, image compression has been widely adopted to reduce the image byte size while retaining acceptable image quality [5]. The size reduction allows the files to be stored, transmitted, or processed within confined computing environments while retaining the detailed information contained in the images [4,6].

Problem Statement

Modern satellite sensors capture high-resolution images which might be challenging to store, process, or transmit. High-resolution images are large in size (total number of bytes) and need to be compressed for processing, storage, or transmission as required. However, even though different file compression technologies exist, the resulting image quality must be retained [9]. The quality of decompressed images is a necessary factor: it determines how much of the detailed information in the images can be regained, and it tests the efficiency of the compression tools and approaches. Satellite imagery remains the most effective way of capturing and representing information on the landscape, and the advancement of satellite image-capturing sensors, as mentioned above, results in large files. The advantages of existing image compression technologies must be harnessed to improve them or to produce better cutting-edge technologies in the future. On the other hand, their drawbacks must also be understood to limit their impact on image compression and decompression.

Objectives

The main objective of this research is to study the different image compression technologies. Other objectives include:

  • Study the trend in the number of satellites and the size of satellite images.
  • Identify the different image compression technologies in existence.
  • Describe the techniques used in different image compression algorithms.
  • Investigate the performance of different image compression approaches.
  • Recommend the best image compression technology.
  • Recommend a way forward if the current image compression technologies do not meet current market needs and standards.

Structure

The paper consists of four main chapters: introduction, literature review, methodology, and analysis and evaluation. The first chapter presents an introductory overview of image compression and the trends in the number and sizes of satellite images in recent years. The chapter also presents the study objectives which guide the literature review, methodology, and analysis sections. The second chapter is the literature review and presents previous works on image capturing, compression, storage, processing, and decompression. The methodology chapter presents the research philosophy, the approach, and the different methods employed therein. It also presents the data collection techniques used. The analysis and evaluation chapter presents the techniques used to evaluate the performance of different image compression technologies. It also paves the way for the conclusion and recommendations.

Literature Review

Introduction

The rapid growth in the number and size of satellite images captured in recent years has spurred the need to develop new, better, and more efficient compression techniques. The increase in size is a result of improved satellite sensor capabilities, which capture electromagnetic radiation reflected by the surface of the earth with higher precision than older generations. This research explores different image compression and decompression techniques employed in the storage and processing of satellite images. To date, satellite images are by far the most effective way of representing the landscape. The information captured by satellites is used by urban planners, weather and climate-change analysts, soil scientists, the military, and intelligence agencies to track different activities on the surface of the earth [10]. This chapter presents a review of existing literature from articles published in the last five years (since 2018) with respect to the capture, storage, compression, decompression, and processing of satellite images. The chapter begins by exploring the need for compression, its history, and advancements in quality and storage, as well as the different techniques used to compress images. The chapter also explores the advantages and disadvantages of different image compression techniques when applied to satellite images.

Doubts of Quantum Computing In Space

The world has witnessed some astonishing examples of quantum algorithms that exceed the finest conventional algorithms in recent decades. However, for the vast majority of problems, it is presently uncertain whether quantum algorithms can give a benefit, and if so, how much, or how to develop quantum algorithms that provide such benefits. Many of today's most difficult computing problems are solved using heuristic algorithms that have not been mathematically shown to outperform other techniques but have been experimentally proven to be successful. While quantum heuristic algorithms have been proposed, empirical testing will only be possible once quantum computation hardware has been developed. The next several years will be exciting as quantum heuristics are empirically tested.

Many difficult computational issues must be solved in order for NASA missions to be successful. The ability to solve even more difficult computational problems to support greater autonomy, space vehicle design, rover coordination, air traffic management, anomaly detection, large-scale data analysis and fusion, and advanced mission planning and logistics is critical to the success of such future missions [50]. NASA Ames Research Center boasts a world-class supercomputing facility with one of the world's most powerful supercomputers to handle NASA's significant computational demands. In 2012, NASA established the Quantum Artificial Intelligence Laboratory (QuAIL) at Ames Research Center to investigate the possibilities of quantum computing for future agency missions. NASA collaborated with Google and USRA to hold the first hackathons the following year [50]. However, the application of quantum computing in space exploration is in its infancy and poses great challenges.

Quantum annealers solve Quadratic Unconstrained Binary Optimization (QUBO) problems with the following cost function:

f(x) = ∑i ai xi + ∑i<j bi,j xi xj

Equation 1. The cost function minimized by quantum annealers.

Where:

x ∈ {0, 1}n is the vector of binary variables, and

{ai, bi,j} are real coefficients.

Current quantum annealers, for example, D-Wave 2X, are designed using superconducting materials and operate at temperatures in the tens of milli-Kelvin range. The processors utilise superconducting flux qubits [4], which are superconductor loops sandwiched by Josephson junctions and tailored to produce a persistent current in the loop when an external flux is introduced. The clockwise and counter-clockwise flow of currents, which correspond to +1 and -1 values of the spin variable sj for qubit j, are the qubit’s computational foundation. The system is evolved under the time-dependent Hamiltonian during quantum annealing.

H(t) = A(s)H0 + B(s)H1

In this case, H0 is the initial Hamiltonian and H1 is the Hamiltonian encoding the QUBO problem.
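As a concrete illustration, the QUBO cost function of Equation 1 can be evaluated and minimized by brute force for a toy three-variable problem. The coefficient values below are invented for illustration; a real annealer approximates this search on much larger instances.

```python
import numpy as np

def qubo_cost(x, a, b):
    """f(x) = sum_i a_i x_i + sum_{i<j} b_ij x_i x_j, with x_i in {0, 1}."""
    x = np.asarray(x)
    # np.triu(..., k=1) keeps only the i<j terms of the quadratic part.
    return float(a @ x + x @ np.triu(b, k=1) @ x)

# Made-up linear and quadratic coefficients for a 3-variable problem.
a = np.array([1.0, -2.0, 0.5])
b = np.array([[0.0, 3.0, -1.0],
              [0.0, 0.0,  2.0],
              [0.0, 0.0,  0.0]])

# Brute-force search over all 2^n bit vectors (what an annealer approximates).
best = min(((x0, x1, x2) for x0 in (0, 1) for x1 in (0, 1) for x2 in (0, 1)),
           key=lambda x: qubo_cost(x, a, b))
print(best, qubo_cost(best, a, b))   # (0, 1, 0) -2.0
```

The exhaustive search scales as 2^n, which is exactly why hardware that samples low-cost states directly is attractive.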

Engineering, building, and programming quantum computers is extremely complex [70]. As a result, they are hampered by errors such as noise, faults, and the loss of quantum coherence, which is critical to their functioning but breaks down before any nontrivial program can finish.

Table 1. Satellite imagery datasets (provider: imagery offered).

USGS Earth Explorer: Landsat; spy satellite images; hyper-spectral imagery
Sentinel Open Access Hub: Sentinel-1; Sentinel-2
NASA Earthdata Search: land cover; derived data
NOAA Data Access Viewer: aerial satellite imagery
DigitalGlobe Open Data Program: sample data; Open Data program
Geo-Airbus Defence: sample imagery
NASA Worldview: raw scientific data
NOAA CLASS: oceanic imagery; atmospheric imagery; environment and climate data
National Institute for Space Research: China–Brazil Earth Resources Satellite program (CBERS) data
Bhuvan Indian Geo-Platform of ISRO: Normalized Difference Vegetation Index global coverage; CartoDEM imagery
JAXA's Global ALOS 3D World: Earth imagery
VITO Vision: PROBA-V; SPOT-Vegetation; METOP
NOAA Digital Coast: coastal satellite imagery
Satellite Land Cover: land cover imagery data
UNAVCO: synthetic aperture radar

Architecture of satellite payload applications

Machine Learning (ML) is becoming more widely utilized in space demonstration missions, such as the European Space Agency's Φ-sat [20], paving the way for its widespread usage in SmallSats and big institutional initiatives. The utilization of Commercial-Off-The-Shelf (COTS) solutions and the advancement of ML deployment tools on radiation-tolerant processing units are fueling this trend. The satellite industry has kept a close eye on these developments.

Sample machine learning application
Figure 1. Sample machine learning application.

The authors of [24] highlighted the most up-to-date on-board machine learning tools, frameworks, and hardware platforms for spacecraft and satellites, illustrating both the relevance of AI in future missions and the challenges that lie ahead. They discussed why benchmarks are needed in the first place, with the objective of evaluating hardware and perhaps establishing a baseline of datasets and models to be employed. The importance of neural networks for Earth Observation is argued, and several hardware options are explored, with MPSoC emerging as a potential option for this use case. The basic requirements for a machine learning application benchmark are stated, and a first framework for such applications is given. Preliminary results demonstrate that ML applications (ship detection) for space are feasible.

The HPDP Architecture

Because the HPDP demonstration kit is designed for use in a ground-based laboratory, it provides easy access to all key characteristics and signals. Nonetheless, converting the fundamental design to a space-friendly layout can be done with little effort. The complete kit includes four identical boards with critical components such as the HPDP chip, SDRAM, EEPROM, SpaceWire, and Channel-Link interfaces. The demo kit, like the HPDP chip itself, is highly programmable, allowing for arbitrary combinations of up to four boards with direct communication links between the chips. A SpaceWire chain, direct SpaceWire and JTAG access to each chip, and two bi-directional Channel-Link interfaces are all accessible at the same time for connecting to external control interfaces, data sources, and receivers.

HPDP architecture
Figure 2. HPDP architecture.

The low and high Earth orbits comprise satellite systems launched by governments and the private sector, as explained below.

Public satellite imagery system

The earth's orbit is open to public and private investors to launch satellites for any purpose. However, launching satellites into space should be done in a manner that does not endanger other satellites already in orbit. Earth-imaging satellites have captured a lot of data that has been made freely available to the public for scientific use. Below is a detailed list of earth imaging programs owned and managed by different governments or unions around the globe.

  • CORONA
    • The CORONA program was launched by the US Central Intelligence Agency with the help of the US Air Force. The Directorate of Science and Technology within the CIA spearheaded the program, which used wet-film panoramic technology [48]. The satellites used two cameras for capturing earth images.
  • Landsat
    • Landsat stands as the oldest earth imagery program ever launched. The satellites have recorded images at 30-meter resolution since the early 1980s [20, 50]. The program has been updated over the years and renamed according to the generation of satellites launched into orbit. From Landsat 5 onward, the system began capturing thermal infrared imagery in addition to optical data. The current generation of satellites in orbit for this program is Landsat 7 to 9.
  • MODIS
    • The program was launched in 2000 and uses 36 spectral bands to capture earth imagery on an almost daily basis. The sensors are installed on NASA's Aqua and Terra satellites.
  • Sentinel
    • Sentinel is a constellation of satellites planned by the European Space Agency and launched in seven missions. Each mission is designed to perform a specific function, ranging from land surface imaging using decametre optical sensors to land and water imaging using thermal and hectometre optical sensors. Currently, three missions, Sentinel-1 to Sentinel-3, have been launched and are already in use.
  • ASTER
    • The program was launched in 1999 by NASA as an Earth-observing system, with the involvement of Japan Space Systems and the Japanese Ministry of Economy, Trade and Industry. The system is designed to capture detailed images and maps of land surface elevation, reflectance, and temperature [44, 52]. The program contributes immensely to NASA's Science Mission and Earth Science divisions. It has advanced the understanding and forecasting of volcanoes, surface climatology, hazard monitoring, hydrology, and land cover and ecosystem change [45, 53, 55].
  • Meteosat
    • Meteosat is a weather-monitoring Earth-imaging system in operation since 1981. The sensors are designed to detect weather factors such as water vapor, water bodies, clouds, and other weather-related elements. Since 1987, Meteosat has been operated by EUMETSAT. Different generations of Meteosat instruments are in operation, including the Meteosat Visible and Infrared Imager, a three-channel system used on the first-generation Meteosat [56]. The Spinning Enhanced Visible and Infrared Imager has provided continuous data on climate change for the past decades [54, 57, 58]. The next-generation instrument, the Flexible Combined Imager, will encompass the technologies used in the first and second generations.

Private satellite imagery systems

Private satellite imagery systems are owned and operated by private corporations. While some of the systems are still in development, many are already capturing and providing essential data for the scientific and industrial communities. The systems are listed and described below:

  • GeoEye
    • The satellite has been in operation since 2008 and captures earth images in high resolutions. Black and white images have a resolution of 16 inches while colored images have a resolution of 64 inches [36, 60]. It is owned and operated by the GeoEye Company.
  • Maxar
    • Maxar owns the WorldView-2 commercial satellite, which captures high-resolution images at 0.46 meters. The satellite uses panchromatic mode only and can distinguish objects that are at least 46 centimeters apart. The company also owns the QuickBird satellite, with a spatial resolution of 60 centimeters. The WorldView-3 satellite has the highest spatial resolution at 31 centimeters and carries both atmospheric and infrared sensors on board.
  • Airbus Intelligence
    • Airbus Intelligence owns the Pleiades constellation, which comprises two high-resolution earth-imaging satellites, Pleiades-HR 1A and 1B, offering 0.5-meter resolution imagery. The satellites image the surface of the earth and operate as both civil and military satellites, designed with European defense standards in mind [62]. The Pleiades Neo constellation comprises four satellites with 0.3 m spatial resolution.
  • Spot Image
    • SPOT Image has three high-resolution satellites orbiting the earth. The satellites provide a 1.5-meter panchromatic channel and 6-meter multi-spectral resolution. The satellites were launched in 2011 and 2012 and are also used by the Taiwanese Formosat-2 and South Korean Kompsat-2.
  • Planet’s RapidEye
    • The RapidEye constellation was launched in 2008 by BlackBridge, which was acquired by Planet in 2015 [42]. The constellation contains identical, calibrated sensors, ensuring images collected by the different satellites are directly comparable. This feature enables the constellation to capture up to four million square kilometers in a day. The imagery captured by the constellation is applied in agriculture, disaster management, cartography, and environmental management [63,67]. The constellation was, however, retired in 2020.
  • ImageSat International
    • The network comprises the smallest high-resolution earth-mapping satellites, orbiting at low altitudes. The satellites are designed to move quickly between target objects. The network is also called Earth Resources Observation Satellites, or EROS. The satellites operate near the poles in circular sun-synchronous orbits. Although the satellites are mainly used for military and intelligence purposes [64], they are also used in infrastructure planning, border control, land mapping, and disaster response.
  • China Siwei
    • The China Siwei Surveying and Mapping Technology Company owns and operates four satellites called SuperView. The satellites operate in the same orbit at an altitude of 530 km [66,69]. They offer high resolutions of 0.5 m panchromatic and 2 m multispectral [16,71,73]. The satellites are also called Gaojing-1, numbered 1 to 4.

The Root Cause of the Need to Compress Satellite Images

As the number of satellites capturing earth images has increased, the size of the datasets has grown exponentially. The quality of satellite images has improved, corresponding to the creation of larger files. The resolution of satellite imagery has also improved from several meters to just a few inches, indicating the images contain finer details than ever before [13,72]. Image information is represented in pixels, and the number of pixels grows as the spatial resolution becomes finer. Each pixel represents a color on the RGB scale. The more detailed an image is, the bigger the file size [14,74,76]. As a result, the computing resources required to process such images are high and might not be available in most computing scenarios.

Without compression, a 1024 pixel x 1024 pixel x 24-bit image would take 3 MB of storage and roughly 6.5 minutes to transmit over a 64 Kbit/s ISDN connection. When the picture is compressed at a 10:1 compression ratio, the storage demand is lowered to about 300 KB and the transmission time is decreased to under 40 seconds. Several 1 MB photos can be compressed onto a floppy disk and transmitted in less time than it takes to deliver an uncompressed version of one of the original files over an AppleTalk network [3].
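The storage and transmission arithmetic can be checked with a quick back-of-the-envelope calculation, assuming 8 bits per byte and an ideal, overhead-free 64 kbit/s link:

```python
# Back-of-the-envelope check of the storage and transmission figures above.
# Assumes 8 bits per byte and an ideal, overhead-free 64 kbit/s link.
width, height, bit_depth = 1024, 1024, 24
link_kbps = 64

raw_bits = width * height * bit_depth
raw_mb = raw_bits / 8 / 1024 / 1024              # 3.0 MB uncompressed
seconds_raw = raw_bits / (link_kbps * 1000)      # roughly 6.5 minutes

ratio = 10                                       # 10:1 compression
seconds_compressed = seconds_raw / ratio         # under 40 seconds

print(raw_mb, round(seconds_raw / 60, 1), round(seconds_compressed, 1))
```

Real links add protocol overhead, so actual transfer times would be somewhat longer than these idealized figures.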

Large picture files continue to be a key bottleneck in distributed systems. Compression is a key component of the methods available for producing file sizes that are manageable and transmittable. Broadening the bandwidth is another solution, but it is very expensive. The mobility and performance of the platform are significant considerations when choosing a compression/decompression approach. The simplest technique to minimize the size of a picture file is to shrink the image directly. By reducing the image size, fewer pixels must be saved, and as a result, the file will load faster [3] [6].

Image compression uses different standards and algorithms to reduce the actual size of images without affecting their quality. Lossy and lossless compression are the main digital compression techniques [78,81]. As the name suggests, the lossless compression technique does not result in any loss in quality. This implies that the complete imagery details can be re-obtained after decompression. Although the technique may not be applicable in all scenarios, it is essential in instances where quality and accuracy are a priority [80,83]. On the other hand, the lossy compression technique produces an almost negligible loss in image quality [82]. The loss is usually minute and very hard to identify. The technique may not have any major impact on photographs but could have dire consequences if applied to detailed imagery such as satellite images, given the sensitivity of the detailed information they contain.

Image Compression Techniques

An image is usually a two-dimensional signal processed by the human visual system. Images are often represented by analog signals. However, they are converted from analog to digital form for processing, transmission, and storage in digital computers. A digital image is nothing more than a two-dimensional arrangement of pixels. Images account for a large portion of data, notably in video conferencing, remote sensing, and healthcare applications. As human dependence on information and computers intensifies, so does the need for efficient ways to store and share enormous volumes of data.

Image compression seeks to reduce the data size necessary to represent an image on modern digital media such as magnetic hard disks, optical drives, or solid-state media [32]. It is a method that produces a compact rendering of a file, lowering image transmission and storage needs [41]. Compression is accomplished by removing one or more of the following redundancies:

  • Inter-pixel redundancy
  • Psychovisual redundancy
  • Coding redundancy

There are two main image compression techniques: lossless and lossy approaches. The classification of the compression technique depends on how the compressed version of the digital image is compared to the original file. If there is a deviation in the quality of data representation, the technique is called lossy, otherwise, lossless.

Lossy Compression Techniques

Lossy techniques outperform lossless approaches when it comes to compression ratios. Lossy techniques are extensively employed because the decompressed image quality is suitable for most scenarios. The reconstructed image is not the same as the original image, but the difference is hard to tell or observe.

Lossy Image compression technique
Figure 3. Lossy Image compression technique.

The outline of lossy compression algorithms is illustrated above. The prediction and transformation steps of the process are fully reversible. The quantization step causes a minor loss of information, while the entropy coding performed after quantization is lossless. Decompression reverses the process: it begins by entropy-decoding the compressed image to obtain the quantized data, then dequantizes the data before applying the inverse transformation to recover the image. The performance of a lossy compression technique is judged by the signal-to-noise ratio, the compression ratio, and the encoding and decoding speed. The different lossy compression schemes are discussed below.

Transformation coding

The Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT) are used to compress images. These schemes transform the pixels of an image into transform coefficients, also known as frequency-domain coefficients [11,88]. The coefficients possess several beneficial qualities, such as the energy-compaction property, which ensures that just a few coefficients capture most of the image content [15, 84, 88]. This means the most important coefficients are kept and used, while the rest are discarded. The retained coefficients are then used in the decompression process.
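The energy-compaction idea can be sketched as follows: a smooth 8x8 block is transformed with an orthonormal 2-D DCT, all but the ten largest coefficients are discarded, and the block is still reconstructed with small error. The block data and the "keep 10" choice are arbitrary illustrations, not taken from any standard.

```python
import numpy as np

N = 8

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct_matrix(N)

# A smooth synthetic block: smooth data compacts well under the DCT.
x, y = np.meshgrid(np.arange(N), np.arange(N))
block = 100 + 20 * np.cos(np.pi * x / N) + 10 * np.cos(np.pi * y / N)

coeffs = C @ block @ C.T                      # forward 2-D DCT
keep = 10                                     # retain the 10 largest coefficients
thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
quantized = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = C.T @ quantized @ C                   # inverse 2-D DCT

err = np.max(np.abs(recon - block))
print(err)   # small: most of the energy lives in a few coefficients
```

Keeping all 64 coefficients would reconstruct the block exactly; discarding the small ones is where the (controlled) loss of a lossy codec comes from.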

Vector quantization

The main concept behind this method is to create a dictionary of fixed-size vectors known as code vectors. A vector typically comprises several pixel values. An image is divided into non-overlapping vectors or blocks known as image vectors. The index of each image vector's nearest code vector is used to compress the original image [90]. This implies that the image is replaced by a list of indices that can be further encoded and compressed.
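A toy version of this scheme might look as follows, where 2x2 pixel blocks are mapped to the nearest entry of a small codebook built with a few hand-rolled k-means iterations. The block size, codebook size, and training data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def to_vectors(img, bs=2):
    """Split an image into non-overlapping bs x bs image vectors."""
    h, w = img.shape
    blocks = img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return blocks.reshape(-1, bs * bs)

def kmeans(vectors, k=4, iters=10):
    """Build a k-entry codebook by simple Lloyd iterations."""
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)              # nearest code vector per block
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vectors[labels == j].mean(axis=0)
    return centers, labels

img = rng.integers(0, 256, (8, 8)).astype(float)
vecs = to_vectors(img)                 # 16 image vectors of 4 pixels each
codebook, indices = kmeans(vecs)       # compressed form: codebook + indices

decoded = codebook[indices]            # decompression is a table lookup
print(indices.shape, codebook.shape)   # (16,) (4, 4)
```

Storing 16 two-bit indices plus a small codebook takes far less space than 64 raw pixels, at the cost of the quantization error visible in `decoded`.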

Fractal coding

The technique breaks down an image using a conventional processing approach such as edge detection, texture and spectrum analysis, and color separation [94]. A fractal library contains details of all image sections. The library also contains Iterated Function System (IFS) codes consisting of compacted integers [100]. A systematic approach is employed to determine the codes for a given image such that, when the IFS codes are applied, the resulting image is a close approximation of the original. The technique is best suited to images with a high self-similarity index [99].

Block Truncation Coding

The technique segments an image into different pixel blocks. The reconstruction and threshold values for each block are usually specified. The mean value for the values in each pixel block is the threshold for that particular block. To encode the image, the values of the block that are more than the threshold are replaced with one, while those that are less than or equal to the threshold are replaced with a zero. The result is called a bitmap and can be reconstructed by reversing the process.
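The encoding described above can be sketched for a single 4x4 block (the pixel values are made up). The two reconstruction levels below are the standard mean- and variance-preserving choice for Block Truncation Coding:

```python
import numpy as np

# Block Truncation Coding of one 4x4 block: pixels above the block mean
# become 1, the rest 0, and two reconstruction levels preserve the
# block's mean and standard deviation. Pixel values are illustrative.
block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)

mean, std = block.mean(), block.std()
bitmap = (block > mean).astype(int)           # 1 bit per pixel
q = bitmap.sum()                              # number of "high" pixels
m = bitmap.size

# Mean/variance-preserving reconstruction levels.
a = mean - std * np.sqrt(q / (m - q))         # level for the 0 pixels
b = mean + std * np.sqrt((m - q) / q)         # level for the 1 pixels

recon = np.where(bitmap == 1, b, a)
print(bitmap)
print(round(recon.mean(), 2), round(block.mean(), 2))
```

The compressed block is just the bitmap plus the two levels `a` and `b`, i.e. 16 bits and two bytes instead of 16 bytes.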

Subband coding

The image is examined in this technique to yield components comprising frequencies in well-defined bands, known as sub-bands. Consequently, coding and quantization are done to each of the bands. The benefit of this method is that the coding and quantization for each of the sub-bands can be achieved independently.
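A one-level Haar filter bank is perhaps the simplest example of this idea: the signal is split into a low-frequency (average) sub-band and a high-frequency (detail) sub-band, each of which could then be quantized and coded independently, and the synthesis step reconstructs the original exactly. The sample signal is invented for illustration.

```python
import numpy as np

# One level of Haar subband analysis on a 1-D signal.
signal = np.array([10.0, 12.0, 14.0, 14.0, 200.0, 202.0, 98.0, 100.0])

pairs = signal.reshape(-1, 2)
low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # approximation sub-band
high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # detail sub-band

# Synthesis: the two sub-bands reconstruct the signal exactly.
recon = np.empty_like(signal)
recon[0::2] = (low + high) / np.sqrt(2)
recon[1::2] = (low - high) / np.sqrt(2)
print(np.allclose(recon, signal))   # True
```

For images, the same split is applied along rows and columns, yielding the familiar four sub-bands per decomposition level.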

Lossless Compression Techniques

The original image may be perfectly retrieved from the compressed (encoded) image using lossless compression techniques. The techniques are also known as noiseless because they do not introduce noise into the transmission of the image. It is also referred to as entropy coding because it eliminates redundancy using decomposition techniques. The compression technique is only employed in a few critical applications such as medicine and intelligence. The techniques include:

Run-length encoding

This is a straightforward compression technique for linear data. It comes in handy when dealing with repetitious data. The approach replaces runs of identical symbols (pixels) with shorter codes. A grayscale image's run-length code is represented by the sequence (Vi, Ri), where Vi is the pixel intensity and Ri is the number of consecutive pixels with that intensity. The relationship is illustrated in the figure below. When both Vi and Ri are represented by one byte, a span of 12 pixels coded in 8 bytes produces a 1.5:1 compression ratio.

Figure 4. Run-length encoding.
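The (Vi, Ri) pairing can be sketched in Python as follows (function names are illustrative):

```python
def rle_encode(pixels):
    """Encode a 1-D pixel sequence as (value, run-length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run-length) pairs back into the pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A toy scanline with three runs of repeated intensities.
row = [82, 82, 82, 82, 140, 140, 140, 255, 255, 255, 255, 255]
runs = rle_encode(row)
```

Here 12 pixels collapse to 3 (value, count) pairs; at one byte per value and per count, that is 6 bytes instead of 12.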

Huffman encoding

This is a method for compressing data symbols according to their statistical probabilities. The image’s pixels are treated and processed as symbols. The number of bits assigned to a symbol is inversely related to its frequency: the higher the frequency, the smaller the number of bits. The Huffman code is a prefix code, meaning that the binary code of any symbol cannot be the prefix of any other symbol’s code. Lossy compression techniques are mainly applied in the early stages of image encoding, while Huffman coding is used as the final step in most image coding standards.
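The construction of such a prefix code can be sketched in Python with a priority queue over symbol frequencies (the helper `huffman_codes` is an illustrative name, not a standard library function):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from a symbol sequence.

    Frequent symbols receive shorter codes, and the result is a
    prefix code: no codeword is a prefix of another.
    """
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak id, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

pixels = "aaaabbbccd"                        # toy symbol stream
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
```

Because the code is prefix-free, the bitstream can be decoded greedily from left to right without separators.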

LZW coding

Lempel-Ziv-Welch (LZW) uses dictionaries to compress image data.

Dictionary coding may be either static or dynamic. In static dictionary coding, the dictionary is fixed during the encoding and decoding stages; in dynamic dictionary coding, the dictionary is updated on the fly. LZW is a widely used compression algorithm in the computer industry and is implemented as the UNIX compress command.

Area coding

Area coding is a more advanced version of run-length coding that takes into account the two-dimensional nature of images. This is a significant step forward from the previous lossless approaches, since it makes little sense to code an image as a purely sequential stream when it is a two-dimensional array of sequences. Area coding methods look for rectangular regions with similar properties, and each region is coded descriptively as an element defined by two corner points and a specific structure. This sort of coding is very efficient, but it is a nonlinear approach that cannot easily be implemented in hardware. As a result, its compression-time performance is not as strong as its compression ratio.

Pros And Cons Of Satellite Image Compression

For a variety of reasons, many of the images available on the Internet today have been compressed. Users benefit from image compression because images load faster and occupy less space on a storage device. Image compression does not change the physical dimensions of an image; rather, it reduces the amount of data used to represent it. One of the most significant advantages of satellite imagery is the capacity to examine huge areas of the Earth quickly. At the same time, the coverage restrictions of present satellite data are evident [98]. Many natural phenomena with high spatial and temporal heterogeneity are not adequately captured by polar-orbiting imagers in Low Earth Orbit (LEO), which attain global coverage in a minimum of one day (but usually two or more days). This constraint is addressed by high-orbit geostationary (GEO) observations, which provide many daily views of the same target. There is, however, a trade-off between satellite image resolution and spatial coverage (typically, larger coverage results in lower resolution). Many applications require observations with both large geographic-temporal coverage and high spatial resolution, which is quite difficult to obtain. As a result, new inventions, supplemental data, and synergies of complementary observations may be called for in the design of satellite sensors to tackle specific objects or issues. The following part discusses this in more detail.

Although satellite observations have demonstrated their excellent capabilities, the data currently delivered by our satellite equipment has low information richness for many applications. As a result, it is time to deploy new sensors with expanded capabilities. For example, it is well known that Multi-Angular Polarimeters (MAPs) provide the best data for characterizing detailed columnar properties of atmospheric aerosol and cloud [97].

Several advanced polarimetric missions are in development or operation, including the Hyper-Angular Rainbow Polarimeter (HARP), the Multi-View Multi-Channel Multi-Polarization Imaging mission (3MI) on the MetOp-SG satellite [95], the Spectropolarimeter for Planetary Exploration (SPEX), and the Multi-Angle Imager for Aerosols (MAIA) instrument [96] as part of the NASA PACE mission. Moreover, the China National Space Administration (CNSA) has invested heavily in polarimetric sensors [94]. The MAI/TG-2, CAPI/TanSat, DPC/GF-5, and SMAC/GFDM are among the polarimetric remote sensing instruments that CNSA has recently released, with the POSP, PCF, and DPC-Lidar to follow in coming years [89]. The principles of these sensors, their technological designs, and algorithm development have all been extensively explored and tested using aerial prototypes [91]. Below are some major impacts of image compression and how they relate to satellite imagery.

Reduction in Size

The most important benefit of image compression is the reduction in file size. Depending on the file type, an image can be compressed repeatedly until it reaches the desired size [40]. Unless the image’s physical dimensions are also adjusted with an image editor, the image occupies less space on the storage medium while retaining the same physical size. This file size reduction is ideal for the Internet, since it allows webmasters to produce image-rich sites without consuming large amounts of storage space or bandwidth.

Slow Devices

Large, uncompressed images may take a long time to load on many electronic devices, such as digital cameras and computers. Compact disc drives, for instance, read data at a fixed rate and cannot display large graphics in real time. Compressed images are also required for a fully functional website on web hosts that transfer data slowly. Uncompressed data likewise takes a long time to load from other storage devices, such as hard disks. Image compression therefore helps data load more quickly on slower devices.

Degradation

Compressing images may cause image degradation, meaning the image’s quality deteriorates. In some formats, such as GIF or PNG, the underlying image data is preserved, but with lossy formats the loss in quality is permanent. Even slight degradation in satellite images can have serious consequences for subsequent analysis [87]. When a high-resolution image must be presented, whether large or small, image compression is a disadvantage.

Data loss

Compressing some image formats reduces the file size by permanently deleting part of the file’s data. As a result, it is important to keep backup copies of the uncompressed images, which nullifies the benefit of compressing them in the first place [85, 93]: instead of saving storage space, keeping a backup copy occupies more of it.

Problems with satellite images

Satellite databases are large, and image processing (generating meaningful pictures from raw data) is time-consuming, since the total amount of land on Earth is so large and the resolution is so high [82, 87]. Image de-striping, for example, is frequently necessary as part of preprocessing. Weather conditions may affect image quality depending on the sensor used: for instance, it is difficult to obtain photographs of regions with regular cloud cover, such as mountaintops. For these reasons, third-party companies usually process the satellite image datasets that are made publicly available for visual or scientific commercial usage. Commercial satellite businesses do not release their images to the public or sell them outright; instead, a license is required to utilize such imagery. As a result, the ability to use commercial satellite images to create derivative works is limited. Some people have expressed worries about their privacy, since they do not want their property to be visible from above. In its Frequently Asked Questions (FAQ) section, Google Maps responds to such issues with the disclaimer: “We recognize your privacy concerns… The visuals displayed by Google Maps are identical to those viewed by anyone flying over or driving by a given geographic place.”

Development of State-of-the-Art Data Processing Approaches for Next-Generation Satellite Images

An important factor influencing the final product’s quality is the quality of the remote-sensing capture method used. In practice, the quality of the generated remote data cannot be significantly enhanced once the equipment has been installed, although retrieval techniques are constantly improving. The eventual remote sensing output may change significantly not only as a result of ingesting data from many types of equipment, but also as a result of improved retrieval concepts. In this context, a new generation of satellite image processing algorithms has made substantial progress in the last decade. New techniques, for example, are capable of extracting a large number of parameters and rely on rapid and precise atmospheric modeling (rather than precomputed Look-Up Tables, or LUTs). Furthermore, simultaneous retrieval of aerosol characteristics is possible, and retrieval of aerosol characteristics in conjunction with land surface and/or cloud properties has been introduced [79, 86]. Finally, the combined retrieval of CO2 and aerosol characteristics, as noted above in the context of the CO2M EU/Copernicus project, is a promising technique for lowering the influence of aerosol pollution on the resultant CO2 product.

Certain computational hurdles remain for reliable cloud remote sensing from space. An efficient and accurate radiative transfer model is necessary. While the independent column approximation is commonly used to retrieve optical depth and cloud droplet size, cloud top roughness causes 3-D radiative transfer (RT) effects that can introduce retrieval biases [79, 81]. Once aerosol and cloud retrievals are linked in a joint framework, the 3-D character of clouds becomes an even greater concern for exploring the interlinkage between aerosols and clouds, for example near cloud boundaries. In this context, a pressing need exists for an inversion-oriented, fast but accurate 3-D RT model for optically and geometrically complex media, incorporating the spectral signature of gas absorption and a correct cloud particle scattering model. For proper interpretation of all satellite images, credible 3-D radiative models are also required to account for horizontal variability of the land surface [77]. Another important unsolved problem is generating 3-D cloud fields to represent 3-D radiation fields, which might be addressed by combining active and passive sensors [23, 24, 25].

Cirrus clouds play an important role in weather and climate processes, according to several observational and modeling studies [73, 75]. Despite their visual thinness [51], cirrus clouds have a worldwide presence, modulate Earth’s radiation budget, and play a vital role in the study of climatic systems. Cirrus particles have very irregular forms, and their single-scattering characteristics, such as single-scattering albedo and scattering phase functions, differ dramatically from those of spherical particles [59, 73]. These irregular forms can produce significant biases in cloud and aerosol retrievals if an algorithm does not account for them [55, 59]. As a result, identifying a realistic cirrus particle model and incorporating it into aerosol retrievals is a viable path to pursue. Furthermore, advancement in global chemical transport and climate models (CTMs) is strongly linked to the utilization of satellite data. When observations are unavailable, for example, trustworthy aerosol retrievals can be incorporated into CTMs to give precise aerosol loadings [50]. Similarly, spectral and polarimetric data are highly sensitive for constraining aerosol type [43, 47, 49, 51], and satellite data can help improve the study of transport models of emitted atmospheric components [33]. As a result, combining the processing of satellite data with existing modeled data is another interesting study area for satellite remote sensing advancement.

Finally, machine learning techniques are now employed more often to identify patterns and insights in geospatial and remote sensing data [27, 39]. Because it offers techniques that can “learn” from data, find patterns, and make judgments with minimal human interaction, this area of artificial intelligence is particularly well suited to the study and interpretation of Earth observation data. Deep neural networks, in particular, have lately been employed in remote sensing investigations, particularly for the processing and interpretation of large volumes of data. Such methods demonstrate the possibility of automatically extracting spatio-temporal linkages and gaining additional knowledge useful for enhancing predictions and modeling of observable physical processes over various timeframes. These approaches are particularly promising for satellite data interpretation, especially when data-driven machine learning is combined with physical process models [83].

Methodology (Preliminary)

Research philosophy

Every researcher’s approach to an investigation is unique and is usually driven by the research goals and other factors. Mill [9, 50] is considered to have been the first to challenge representatives of the social sciences to compete with the classical sciences, predicting that if his counsel was adopted, these fields would rapidly mature. Until then, their development had arisen from intellectual and theological frameworks that restricted them [5, 17]; on these grounds, the social sciences adopted this counsel [29]. Research philosophy might be regarded as the growth of research knowledge, its nature, and its assumptions [7]. An assumption appears to be a preliminary reasoning statement, yet it depends on the philosophizer’s experience and knowledge [5]. The research philosophy here is based on five key factors of satellite images: scale, orientation to the north, texture, shapes and patterns, and the application of prior knowledge to the maps.

A philosophy and scientific research paradigm is influenced by a variety of elements, including one’s worldview, mental model, perceptions, attitudes, and beliefs about reality. Researchers’ opinions and values are vital in this notion in order to supply strong reasoning and language for achieving accurate findings. In some situations, the researcher’s position might have a major influence on the study outcome [11]. Experts in the various fields of natural science are able to reach broad consensus through open discussion about which findings are true “discoveries”; in the social sciences, reaching such a consensus is challenging.

Research approach

Research approaches are the strategies and processes that cover everything from formulated assumptions to precise data collection, analysis, and interpretation methodologies. This strategy necessitates a number of decisions, which need not be made in the order in which they are presented here: whether the method is suitable for investigating the subject, the philosophical beliefs that the scholar brings to the study, and the research designs and particular research methodologies. Data collection, analysis, interpretation, and presentation methodologies should jointly play a role in choosing a research approach. This is both descriptive and explanatory research, as it seeks to establish the current and potential technologies for the compression of satellite images.

Methodology

Research methodology refers to the processes or strategies employed in finding, selecting, processing, and analyzing information on a particular topic [7]. The methodology plays an important role in demonstrating the validity of the research. Usually, a research methodology defines data collection and processing tools, data sources, population and sample sizes, as well as the research design [90]. The methodology can be compared to a formula in a mathematical expression: it defines how the research will be carried out, the variables needed, and how the projected outcome will be presented. Since the research relies on secondary data, it is more qualitative than quantitative. As a result, the research was approached as a qualitative study seeking to establish the best image compression technologies for satellite images.

Neural Image Compression

Image compression is the process of shrinking an image’s pixels, color components, or dimensions in order to minimize its overall file size, thereby decreasing the amount of data to store and process [47]. Advanced image optimization algorithms can detect the most relevant visual elements while discarding the less relevant ones. Image compression is generally achieved by reducing spatial redundancy in the visual data and subsequently reconstructing the image. In general, an autoencoder is a type of neural network that transforms the input into a compact code (or ‘bottleneck’) and then reconstructs it into a finished product [33]. The visual representation of the process is provided in the figure below.

Figure 1. Autoencoder scheme.

At present, this framework is utilized in various industries for image compression, image classification, anomaly detection, etc. [1]. Furthermore, autoencoders can be used for both unsupervised (no classified data) and supervised analysis [20]. Another advantage of the framework, compared to traditional algorithms of image compression, is the high accuracy of the outputs and similarity to other computer-vision models [65]. As a result, it is possible to transform the existing networks of anomaly detection or image classification into image compression, which makes it more convenient for satellite communication [70]. Ultimately, compared to traditional algorithms, neural image compression is a highly effective framework and is applicable to satellite imagery.

Classical and Variational Encoders

Classical and variational autoencoders are the two prominent frameworks utilized for neural image compression. An instance of the former type is the Multi-Layer Perceptron (MLP), which follows the standard autoencoder architecture depicted in Fig. 1 [27]. Variational autoencoders utilize an additional layer of encoding that is specifically prepared depending on the objectives [37]. As a result, variational autoencoders demonstrate positive results on both image compression and change detection [47, 53, 57]. For instance, an improved variational autoencoder shows better accuracy than most traditional algorithms and standard convolutional autoencoders for desertification detection based on satellite imagery [65, 67]. Therefore, the implementation of variational autoencoders might be highly beneficial for various uses in the industry.

Specific Models

Most contemporary neural networks are based on the classical and variational models of autoencoders. Consequently, a derivative of these types – the recurrent neural network (RNN) – is the basis for a large number of frameworks [87]. Experts primarily adjust the quantization and entropy coding of neural image compression to achieve the best possible reconstruction accuracy and storage efficiency [97]. Furthermore, while such autoencoders generally perform adequately on specialized hardware, some models demonstrate positive results even on resource-constrained edge systems [17]. As a result, it is possible to adjust the frameworks to the objectives regardless of the conditions. Technological advancement in this area would help close the research gaps in the satellite industry. For instance, comprehensive image compression methods might be used to improve global agricultural monitoring systems. From these considerations, the industry might significantly benefit from innovative frameworks for satellite imagery compression.

Multi-Layer Perceptrons

Multi-layer perceptrons have two main layers of neurons, input and output, with one or more hidden layers between them. Theoretically, the more hidden layers, the more effective the algorithm is at dimension reduction and data compression [15, 18]. An MLP transforms the spatial data as a unit. The first MLP algorithm for image compression was released in 1988, combining conventional techniques, including binary coding, spatial domain transformation, and quantization, into a single compression tool [28, 35]. The approach employed neural networks to establish the most suitable combination of binary codes but could not adapt its parameters to a variable compression ratio [61, 75]. The approach has since been advanced by introducing predictive algorithms that estimate each pixel value from its neighbors, with backpropagation used to minimize the mean square error between original and predicted pixel values [18, 68].
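A minimal numpy sketch of such an MLP autoencoder, with a single linear bottleneck layer trained by gradient descent to minimize the mean square reconstruction error (the layer sizes, learning rate, and toy data are illustrative assumptions; real compressors add nonlinearities, quantization, and entropy coding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pixel block" dataset: 64 samples of 16 pixel values in [0, 1].
X = rng.random((64, 16))

# One hidden (bottleneck) layer of 4 units between 16 inputs and
# 16 outputs: each block must be squeezed through 4 numbers.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def loss(X):
    code = X @ W_enc          # compression step
    X_hat = code @ W_dec      # reconstruction step
    return ((X_hat - X) ** 2).mean()

lr = 0.05
history = [loss(X)]
for _ in range(500):
    code = X @ W_enc
    X_hat = code @ W_dec
    err = 2 * (X_hat - X) / X.size        # d(MSE)/d(X_hat)
    W_dec -= lr * code.T @ err            # backpropagate to the decoder
    W_enc -= lr * X.T @ (err @ W_dec.T)   # ...and through to the encoder
    history.append(loss(X))
```

After training, the 4-value code per block is the compressed representation, and the reconstruction error steadily decreases over the iterations.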

Data collections

The data collected during airborne remote sensing missions may be downloaded once the aircraft lands. It may then be processed and transmitted to the intended recipient. However, because a satellite remains in orbit throughout its lifetime, data collected from satellite platforms must be electronically sent to Earth. The technology developed to do this can also be utilized by an airborne platform if the data is immediately needed on the ground.

There are three basic ways of transferring data from satellites to ground stations, and the choice of transmission depends on the positions of the satellite and the ground station. First, if the ground station is visible to the satellite, the data is sent directly. Second, the satellite can store the data and wait until the ground station appears in its line of sight. The third option is applicable for urgent purposes: if the satellite capturing data is not within the line of sight of the ground station, the data can be relayed through another satellite that forwards the signal to the ground station [31, 33].

Several Earth imaging programs, mostly owned by governments, are available to the public for scientific, academic, and exploratory use. The datasets are large and are best accessed through cloud computing services such as Amazon, Azure, Google Colab, and Kaggle, which provide computing and storage capacity that cannot be matched on personal computers. There are several ground pickup stations around the world to facilitate the collection and processing of satellite signals. Usually, satellites send raw digital data, which must be processed to correct geometric, atmospheric, and systematic distortions. The data is also converted into the right format for storage in the various satellite imagery datasets. The transformed data is stored on conventional storage devices such as magnetic hard disks, compact discs, or solid-state media such as thumb drives [21].

Visible Satellite Images (VIS) are captured from the sun’s reflected light rays. The images represent what is visible to the human eye, in all colors as they appear. However, the images are presented in 2D and are hence static. Most water bodies appear blue, while forest cover appears green. VIS images are easier to analyze from a human point of view [4]. They are also easier to compress using both lossy and lossless techniques.

Satellite imagery datasets (IR and VIS images)

Infrared satellite imagery acts as a temperature map: weather satellites detect heat energy in the infrared spectrum [62]. Objects visible to the human eye, such as water, land surfaces, and clouds, are displayed on the satellite image according to their temperature. Dark colors represent warm temperatures, while light colors represent low temperatures [52]. A temperature scale usually accompanies the satellite image for clarification and interpretation purposes. An IR satellite image dataset can be found on the Sentinel Open Access Hub.

Analysis Technique(s)

Table 2 provides a cross-comparison of some of the most prominent frameworks utilized in satellite imagery compression.

Table 2. Performance comparison of different image compression techniques.

Method | Features | Advantages
RNN-based Approach [6] | Innovative approach via analysis and synthesis blocks based on Generalized Divisive Normalization (GDN) | Outperforms traditional algorithms, such as BPG and JPEG, on most parameters according to the Kodak benchmark
Supervised Image Compression for Split Computing [7] | Utilizes split computing and knowledge distillation to create a lightweight feature extractor for subsequent reconstruction | Can be effectively used for resource-constrained edge computing systems; demonstrates positive results on input and feature compression
Slimmable Compressive Autoencoders [8] | Utilizes only non-parametric operations in compressive autoencoders | Effective for resource-constrained edge computing systems; adjustable to various purposes and settings; low memory footprint, costs, and latency
Generative Adversarial Networks [9] | Utilize contemporary quantization-aware training for GAN architectures | Reveal the efficiency of GAN quantization, which can be used in subsequent RNNs and VAEs
Rate-Distortion Optimization-based Network (RDONet) [10] | Implements dynamic block partitioning and additional hierarchical levels to optimize the rate-distortion component | Outperforms traditional algorithms while saving up to 20% of RD

Conclusion and Future Work

The issue of data storage efficiency is critical to satellite imagery due to the extensive processing load and analysis of the information. From these considerations, it is essential to continually develop innovative methods of image compression and imagery processing. The current paper has analyzed the most prominent contemporary frameworks and cross-compared their performance. Ultimately, some of the most effective methods are the RNN-based approach, Slimmable CAE, GAN methods, and RDONet. In future work, it is possible to conduct a more thorough analysis of the existing methods and evaluate the innovative approaches.

References

Polar Geospatial Center. “Imagery processing options”. 

S. Kim and M. Smolin. “Neural image compression”. Web.

Fabrizio Patuzzo. “A comparison of classical and variational autoencoders for anomaly detection”. In: arXiv preprint arXiv: 2009.13793v1 (2020).

Ekaterina Kalinicheva et al. “Neural network autoencoder for change detection in satellite image time series,” IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2018, doi:10.1109/icecs.2018.8617850.

Yacine Zerrouki et al. “Desertification detection using an improved variational autoencoder-based approach through ETM-Landsat satellite data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, 2021.

Khawar Islam et al. “Image compression with recurrent neural network and generalized divisive normalization”. In: arXiv preprint arXiv: 2109.01999 (2021).

Yoshitomo Matsubara et al. “Supervised compression for resource-constrained edge computing systems”. In: arXiv preprint arXiv: 2108.11898 (2021).

Fei Yang et al. “Slimmable compressive autoencoders for practical neural image compression”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

Pavel Andreev et al. “Quantization of generative adversarial networks for efficient inference: A methodological study”. In: arXiv preprint arXiv: 2108.13996 (2021).

Fabian Brand et al. “Rate-distortion optimized learning-based image compression using an adaptive hierarchical autoencoder with conditional hyperprior”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

T. Goldstein and C. Studer, “PhaseMax: convex phase retrieval via basis pursuit,” IEEE Transactions on Information Theory, no. 4, pp. 2675-2689, 2018.

K. Cao, X. Zhou and Y. Cheng, “Improved focal underdetermined system solver method for radar coincidence imaging with model mismatch,” Journal of Electronic Imaging, no. 26, 2017.

R. Zhao, X. Lai, X. Hong and Z. Lin, “A matrix-based IRLS algorithm for the least Lp-norm design of 2-D FIR filters,” Multidimensional Systems and Signal Processing, no. 2, pp. 1-15, 2017.

G. Oliveri, M. Salucci, N. Anselmi and A. Massa, “Compressive sensing as applied to inverse problems for imaging: theory, applications, current trends, and open challenges,” IEEE Antennas and Propagation Magazine, no. 59, pp. 34-46, 2017.

A. Adler, D. Boublil, M. Elad and M. Zibulevsky, “A deep learning approach to block-based compressed sensing of images,” in ICASSP, 2017.

M. Iliadis, L. Spinoulas and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Processing, vol. 72, pp. 9-18, 2018.

K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche and A. Ashok, “ReconNet: Non-iterative reconstruction of images from compressively sensed measurements,” in CVPR, 2016.

A. Mousavi and R. G. Baraniuk, “Learning to invert: Signal recovery via deep convolutional networks,” in ICASSP, 2017.

J. Zhang and B. Ghanem, “ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.

W. Shi, F. Jiang, S. Liu and D. Zhao, “Scalable Convolutional Neural Network for Image”.

Kevin McManamon, Richard Lancaster, and Nuno Silva. ExoMars Rover Vehicle Perception System Architecture and Test Results. Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA), 2017.

M. Mammarella, C. A. Paissoni, N. Viola, A. Denaro, E. Gargioli, and Massobrio. The Lunar Space Tug: A sustainable bridge between low Earth orbits and the Cislunar Habitat. Acta Astronautica, 138:102-117, 2017.

Satomi Kawamoto, Yasushi Ohkawa, Hiroyuki Okamoto, Kentaroh Iki, Teppei Okumura, Yasuhiro Katayama, Masato Hayashi, Yuta Horikawa, Hiroki Kato, and Naomi Murakami. Current Status of Research and Development on Active Debris Removal at JAXA. 7th European Conference on Space Debris (SDC7), 2017.

Bob Balaram, Timothy Canham, Courtney Duncan, Håvard F Grip, Wayne Johnson, Justin Maki, Amelia Quon, Ryan Stern, and David Zhu. Mars Helicopter Technology demonstrator. In AIAA Atmospheric Flight Mechanics Conference, 2018.

Powell, Wesley and Campola, Michael and Sheets Teresa and Davidson, Abigail and Welsh, Sebastian. Commercial Off-The-Shelf GPU Qualification for Space Applications. Technical report, NASA, 2018.

Edward Wyrwas. Body of Knowledge for Graphics Processing Units (GPUs). Technical report, NASA, 2018.

Leonidas Kosmidis, Jérôme Lachaize, Jaume Abella, Olivier Notebaert, Francisco J Cazorla, and David Steenari. GPU4S: Embedded GPUs for Space. In Digital System Design (DSD) Euromicro Conference, 2019.

Nan Li, Aimin Xiao, Mengxi Yu, Jianquan Zhang, and Wenbo Dong. Application of GPU On-orbit and Self-adaptive Scheduling by its Internal Thermal Sensor. In International Astronautical Congress (IAC), 2018.

Daniele Luchena, Vincenzo Schiattarella, Dario Spiller, Marco Moriani, and Fabio Curti. A New Complementary Multi-Core Data Processor for Space Applications. In International Astronautical Congress (IAC), 2018.

I. Rodriguez et al. GPU4S Bench: Design and Implementation of an Open GPU Benchmarking Suite for Space On-board Processing. Technical Report PC-DAC-RR-CAP-2019-1, Universitat Politecnica de Catalunya. Web.

Iván Rodriguez, Leonidas Kosmidis, Olivier Notebaert, Francisco J Cazorla, and David Steenari. An On-board Algorithm Implementation on an Embedded GPU: A Space Case Study. In 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1718-1719, 2020.

Iván Rodriguez, Alvaro Jover, Leonidas Kosmidis, and David Steenari. On the Embedded GPU Parallelisation of On-Board CCSDS Compressors: a Benchmarking Approach. International Workshop on On-Board Payload Data Compression (OBPDC), 2020.

L. Kosmidis, I. Rodriguez, A. Jover, S. Alcaide, J. Lachaize, J. Abella, O. Notebaert, F. J. Cazorla, and D. Steenari. GPU4S: Embedded GPUs in Space – Latest Project Updates. Elsevier Microprocessors and Microsystems, 77, 2020.

AMD. ROCm Developer Tools: HIP. 2019.

Marc Benito, Matina Maria Trompouki, Leonidas Kosmidis, Juan David Garcia, Sergio Carretero, and Ken Wenger. Comparison of GPU Computing Methodologies for Safety-Critical Systems: An Avionics Case Study. In Design, Automation & Test in Europe Conference & Exhibition (DATE), 2021.

Yan-Tyng Chang, Robert T Hood, Haoqiang Jin, Steve W Heistand, Samson H Cheung, Mohammad J Djomehri, Gabriele Jost, and Daniel S Kokron. Evaluating the Suitability of Commercial Clouds for NASA’s High Performance Computing Applications: A Trade Study. Technical Report NAS-2018-01, NASA, 2018.

Roland Brochard, Jérémy Lebreton, Cyril Robin, Keyvan Kanani, Grégory Jonniaux, Aurore Masson, Noela Despré, and Ahmad Berjaoui. Scientific Image Rendering for Space Scenes with the SurRender Software. In International Astronautical Congress, 2018.

Steven B Goldberg, Mark W Maimone, and Larry Matthies. Stereo Vision and Rover Navigation Software for Planetary Exploration. In IEEE Aerospace Conference, 2002.

The Consultative Committee for Space Data Systems. Image Data Compression Recommended Standard CCSDS 122.0-B-2, 2017.

Vipul Mani, Amogh Kulkarni, Mahish Guru, Raghav Pathak, and Shashank Pathak. Exploration of Mars through an Autonomous and Machine Learning enabled Constellation of Drones. In International Astronautical Congress, 2018.

Ewan Reid, Michele Faragalli, Kaizad Raimalwala, and Evan Smal. Machine Learning Applications for Safe and Efficient Rover Mobility Operations and Planning. In International Astronautical Congress (IAC), 2018.

CNES. CERES: Three satellites to boost France's intelligence capabilities, 2019. Web.

David Steenari, Leonidas Kosmidis, Ivan Rodriguez, Alvaro Jover, and Kyra Förster. OBPMark (On-Board Processing Benchmarks) – Open Source Computational Performance Benchmarks for Space Applications. In European Workshop on On-Board Data Processing (OBDP), 2021.

Iván Rodriguez. An On-board Algorithm Implementation on an Embedded GPU: A Space Case Study. Master's thesis, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, 2021.

Alvaro Jover-Alvarez, Alejandro J. Calderon, Ivan Rodriguez, Leonidas Kosmidis, Kazi Asifuzzaman, Patrick Uven, Kim Grüttner, Tomaso Poggi, and Irune Agirre. The UP2DATE Baseline Research Platforms. In Proceedings of the Design, Automation & Test in Europe (DATE), 2021.

S. Alcaide, L. Kosmidis, C. Hernandez, and J. Abella. High-Integrity GPU Designs for Critical Real-Time Automotive Systems. In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2019.

S. Alcaide, L. Kosmidis, C. Hernandez, and J. Abella. Software-only Diverse redundancy on GPUs for Autonomous Driving Platforms. In 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS), pages 90–96, 2019.

Fredrik C Bruhn, Nandinbaatar Tsog, Fabian Kunkel, Oskar Flordal, and Ian Troxel. Enabling Radiation Tolerant Heterogeneous GPU-based Onboard Data Processing in Space. CEAS Space Journal, pages 1–14, 2020.

D. Steenari, K. Förster, D. O’Callaghan, C. Hay, M. Cebecauer, M. Ireland, S. McBreen, M. Tali, and R. Camarero, “Survey of high-performance processors and FPGAs for on-board processing and machine learning applications,” in OBDP2021, 2nd European Workshop on On-Board Data Processing. ESA/CNES/DLR, 2021.

G. Lentaris, K. Maragos, I. Stratakos, L. Papadopoulos, O. Papanikolaou, D. Soudris, M. Lourakis, X. Zabulis, D. Gonzalez-Arjona, and G. Furano, “High-performance embedded computing in space: Evaluation of platforms for vision-based navigation,” Journal of Aerospace Information Systems, vol. 15, no. 4, pp. 178–192, 2018.

F. Wartel and A. Certain, “HP4S: High performance parallel payload processing for space,” in OBDP2021, 2nd European Workshop on On-Board Data Processing. ESA/CNES/DLR, 2021.

M. Ghiglione, V. Serra, T. Helfers, R. C. Amorin, and R. Martins, “Machine learning application benchmark for satellite on-board data processing,” in OBDP2021, 2nd European Workshop on On-Board Data Processing. ESA/CNES/DLR, 2021.

L. Kosmidis, J. Lachaize, J. Abella, O. Notebaert, F. J. Cazorla, and D. Steenari, “GPU4S: Embedded GPUs in space,” in 2019 22nd Euromicro Conference on Digital System Design (DSD). IEEE, 2019, pp. 399–405.

L. Kosmidis, I. Rodriguez, Á. Jover, S. Alcaide, J. Lachaize, J. Abella, O. Notebaert, F. J. Cazorla, and D. Steenari, “GPU4S: Embedded GPUs in space – latest project updates,” Microprocessors and Microsystems, vol. 77, p. 103143, 2020.

I. Rodriguez-Ferrández and L. Kosmidis, “Euclid NIR GPU: Embedded GPU-accelerated near-infrared image processing for on-board space systems.”

I. Rodriguez, L. Kosmidis, J. Lachaize, O. Notebaert, and D. Steenari, “GPU4S bench: Design and implementation of an open GPU benchmarking suite for space on-board processing.”

Embedded Microprocessor Benchmark Consortium, “CoreMark: An EEMBC Benchmark,” 2018.

J. Jalle, M. Hjorth, J. Andersson, R. Weigand, and L. Fossati, “DSP benchmark results of the GR740 rad-hard quad-core LEON4FT,” 2016.

Cobham Gaisler, “GR740-VALT-0010, GR740 technical note on benchmarking and validation,” 2019.

P. Mattson, V. J. Reddi, C. Cheng, C. Coleman, G. Diamos, D. Kanter, P. Micikevicius, D. Patterson, G. Schmuelling, H. Tang et al., “MLPerf: An industry standard benchmark suite for machine learning performance,” IEEE Micro, vol. 40, no. 2, pp. 8–16, 2020.

I. Rodriguez, A. Jover, L. Kosmidis, and D. Steenari, “On the embedded GPU parallelisation of on-board CCSDS compressors: a benchmarking approach,” in OBPDC2020. ESA, 2020.

“CCSDS 121.0-B-3, Lossless Data Compression, Blue Book, Issue 3, Recommended Standard,” 2020.

“CCSDS 120.0-G-3, Lossless Data Compression, Green Book, Informational Report,” 2013.

“CCSDS 122.0-B-2, Image Data Compression, Blue Book, Issue 2, Recommended Standard,” 2017.

“CCSDS 123.0-B-2, Low-Complexity and Near-Lossless Multispectral and Hyperspectral Image Compression, Blue Book, Issue 2, Recommended Standard,” 2019.

I. Blanes, A. Kiely, M. Hernández-Cabronero, and J. Serra-Sagristà, “Performance impact of parameter tuning on the CCSDS-123.0-B-2 low-complexity lossless and near-lossless multispectral and hyperspectral image compression standard,” Remote Sensing, vol. 11, no. 11, p. 1390, 2019.

“CCSDS 352.0-B-2, CCSDS Cryptographic Algorithms, Blue Book, Issue 2, Recommended Standard,” 2019.

Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. Web.

MNIST: Handwritten digit database. Web.

C. Zhou and C. Paffenroth: Anomaly Detection with Robust Deep Autoencoders. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 665–674 (2017).

H.H.W.J. Bosman, G. Iacca, A. Tejada, H.J. Wörtche, and A. Liotta: Spatial Anomaly Detection in Sensor Networks using Neighborhood Information. Information Fusion 33, 41–56 (2017).

M. Abadi et al.: TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems (2016).

T.C. Yeam, N. Ismail, K. Mashiko, and T. Matsuzaki: FPGA Implementation of Extreme Learning Machine System for Classification. In: Proceedings of the IEEE Region 10 Conference. pp. 1868–1873 (2017).

The Consultative Committee for Space Data Systems CCSDS. Low-Complexity Lossless Multispectral and Hyperspectral Image Compression, Recommended Standard CCSDS 123.0-B-2, Blue Book, 2019.

Abboud, Ali J., Ali N. Albu-Rghaif, and Abbood Kirebut Jassim. “Balancing compression and encryption of satellite imagery.” International Journal of Electrical and Computer Engineering 8, no. 5 (2018): 3568.

Afjal, Masud Ibn, Md Al Mamun, and Md Palash Uddin. “Band reordering heuristics for lossless satellite image compression with 3D-CALIC and CCSDS.” Journal of Visual Communication and Image Representation 59 (2019): 514-526.

Ahn, Kyohoon, Sung-Hun Lee, In-Kyu Park, and Hwan-Seok Yang. “Simulation of a laser tomography adaptive optics with Rayleigh laser guide stars for the satellite imaging system.” Current Optics and Photonics 5, no. 2 (2021): 101-113.

Akshay, S., T. K. Mytravarun, N. Manohar, and M. A. Pranav. “Satellite image classification for detecting unused landscape using CNN.” In 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), pp. 215-222. IEEE, 2020.

Asokan, Anju, J. Anitha, Monica Ciobanu, Andrei Gabor, Antoanela Naaji, and D. Jude Hemanth. “Image processing techniques for analysis of satellite images for historical maps classification—An overview.” Applied Sciences 10, no. 12 (2020): 4207.

Bai, Yuanchao, Xianming Liu, Wangmeng Zuo, Yaowei Wang, and Xiangyang Ji. “Learning scalable ℓ∞-constrained near-lossless image compression via joint lossy image and residual compression.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11946-11955. 2021.

Bass, L. P., Yu A. Plastinin, and I. Yu Skryabysheva. “Machine learning in problems involved in processing satellite images.” Measurement Techniques 63, no. 12 (2021): 950-958.

Bauer, Susanne E., Kostas Tsigaridis, Greg Faluvegi, Maxwell Kelley, Ken K. Lo, Ron L. Miller, Larissa Nazarenko, Gavin A. Schmidt, and Jingbo Wu. “Historical (1850–2014) aerosol evolution and role on climate forcing using the GISS ModelE2.1 contribution to CMIP6.” Journal of Advances in Modeling Earth Systems 12, no. 8 (2020): e2019MS001978.

Behrens, Jonathan R., and Bhavya Lal. “Exploring trends in the global small satellite ecosystem.” New Space 7, no. 3 (2019): 126-136.

Beyer, Ross A., Oleg Alexandrov, and Scott McMichael. “The Ames Stereo Pipeline: NASA’s open source software for deriving and processing terrain data.” Earth and Space Science 5, no. 9 (2018): 537-548.

Blackwell, William J., S. Braun, R. Bennartz, C. Velden, M. DeMaria, R. Atlas, J. Dunion et al. “An overview of the TROPICS NASA earth venture mission.” Quarterly Journal of the Royal Meteorological Society 144 (2018): 16-26.

Borra, Surekha, Rohit Thanki, and Nilanjan Dey. Satellite image analysis: clustering and classification. Singapore: Springer, 2019.

Bosch, Marc, Kevin Foster, Gordon Christie, Sean Wang, Gregory D. Hager, and Myron Brown. “Semantic stereo for incidental satellite images.” In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1524-1532. IEEE, 2019.

Bowler, Tim R. “A new space race.” In Deep Space Commodities, pp. 13-19. Palgrave Macmillan, Cham, 2018.

Bychkov, I. V., G. M. Ruzhnikov, R. K. Fedorov, A. K. Popova, and Y. V. Avramenko. “Classification of Sentinel-2 satellite images of the Baikal Natural Territory.” Computer Optics 46, no. 1 (2022): 90-96.

Canton, Helen. “European Space Agency—ESA.” In The Europa Directory of International Organizations 2021, pp. 549-551. Routledge, 2021.

Cawse-Nicholson, Kerry, Philip A. Townsend, David Schimel, Ali M. Assiri, Pamela L. Blake, Maria Fabrizia Buongiorno, Petya Campbell et al. “NASA’s surface biology and geology designated observable: A perspective on surface imaging algorithms.” Remote Sensing of Environment 257 (2021): 112349.

CENTRES, ESA. “European Space Agency—ESA.” Organization (2020).

Chen, Zhenzhong, Ye Hu, and Yingxue Zhang. “Effects of compression on remote sensing image classification based on fractal analysis.” IEEE Transactions on Geoscience and Remote Sensing 57, no. 7 (2019): 4577-4590.

Cheng, Tianze. “Review of novel energetic polymers and binders–high energy propellant ingredients for the new space race.” Designed Monomers and Polymers 22, no. 1 (2019): 54-65.

Duvaux-Béchon, Isabelle. “The European Space Agency (ESA) and the United Nations 2030 SDG Goals.” In Embedding Space in African Society, pp. 223-235. Springer, Cham, 2019.

Ekaterina Kalinicheva et al. “Neural network autoencoder for change detection in satellite image time series,” IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2018, doi:10.1109/icecs.2018.8617850

Erickson, Andrew S. “Revisiting the US-Soviet space race: Comparing two systems in their competition to land a man on the moon.” Acta Astronautica 148 (2018): 376-384.

Fabian Brand et al. “Rate-distortion optimized learning-based image compression using an adaptive hierarchical autoencoder with conditional hyperprior”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

Fei Yang et al. “Slimmable compressive autoencoders for practical neural image compression”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.

Gould, Michael. “Facilitating Small Satellite Enterprise for Emerging Space Actors: Legal Obstacles and Opportunities.” In Legal Aspects Around Satellite Constellations, pp. 29-46. Springer, Cham, 2021.

Green, Robert O., Natalie Mahowald, Charlene Ung, David R. Thompson, Lori Bator, Matthew Bennet, Michael Bernas et al. “The Earth surface mineral dust source investigation: An Earth science imaging spectroscopy mission.” In 2020 IEEE Aerospace Conference, pp. 1-15. IEEE, 2020.