Benchmarking Methodology Working Group Z. Lai Internet-Draft H. Li Intended status: Informational Tsinghua University Expires: 11 January 2024 Q. Zhang Zhongguancun Laboratory Q. Wu Y. Deng Tsinghua University 10 July 2023 Considerations for Benchmarking Network Performance in Satellite Internet Constellations draft-lai-bmwg-sic-benchmarking-02 Abstract Entering the "NewSpace" era, satellite Internet constellations (SIC) are scaling up at a fast pace. Emerging satellite networks constructed upon SICs enable great opportunities for ubiquitous and low-latency Internet services globally. It should be useful for satellite service providers to run various laboratory experiments to comprehensively and systematically benchmark the network performance of their new network techniques before launching them to the outer space. However, existing benchmarking methodologies for terrestrial networks either achieve fidelity but lack flexibility or achieve flexibility but lack fidelity. This draft describes our basic considerations as specifications to guide the network performance benchmark for SICs. A satellite network constructed upon emerging SICs in low earth orbit has many unique characteristics as compared to existing terrestrial networks. Specifically, our considerations include multiple networking models of emerging SICs, a data-driven benchmarking approach which may enable testers to build a laboratory benchmark environment with acceptable flexibility and fidelity to support various experiments, critical configuration parameters that might affect the SIC network performance, and several suggested test cases for network performance benchmarking. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Lai, et al. Expires 11 January 2024 [Page 1] Internet-Draft Benchmarking SIC Network Performance July 2023 Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on 11 January 2024. Copyright Notice Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/ license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 2. Notation and Terminology . . . . . . . . . . . . . . . . . . 4 3. SIC Networking Models . . . . . . . . . . . . . . . . . . . . 5 3.1. SIC Components . . . . . . . . . . . . . . . . . . . . . 5 3.2. Networking Models of Emerging SICs . . . . . . . . . . . 6 4. Considerations for SIC Benchmarking Methodology . . . . . . . 9 4.1. LBE Requirements . . . . . . . . 
. . . . . . . . . . . . 9 4.2. Exploiting A Data-driven Approach for SIC Benchmarking . 10 4.3. Benchmarking Workflow . . . . . . . . . . . . . . . . . . 12 4.4. Benchmarking Scope . . . . . . . . . . . . . . . . . . . 12 5. Considerations for Benchmarking Environment Configuration . . 12 5.1. Terminology and Definition of the Parameters . . . . . . 13 5.1.1. Parameters on Constellation Topology . . . . . . . . 13 5.1.2. Parameters on Ground Station Distribution . . . . . . 13 5.1.3. Parameters on Network Links . . . . . . . . . . . . . 14 5.2. Setting of the Parameters . . . . . . . . . . . . . . . . 14 5.2.1. Constellation Orbital Parameters . . . . . . . . . . 14 5.2.1.1. Regulatory-Data-Driven Orbital Parameters . . . . 14 5.2.1.2. Live-Data-Driven Orbital Parameters . . . . . . . 16 5.2.2. Ground Station Distribution . . . . . . . . . . . . . 16 Lai, et al. Expires 11 January 2024 [Page 2] Internet-Draft Benchmarking SIC Network Performance July 2023 5.2.3. Connectivity Pattern . . . . . . . . . . . . . . . . 16 5.2.3.1. Crowd-Sourcing-Driven Connectivity Pattern . . . 17 5.2.3.2. Strategy-based Connectivity Pattern . . . . . . . 17 5.2.4. Network Link . . . . . . . . . . . . . . . . . . . . 17 6. Considerations for SIC Test Cases . . . . . . . . . . . . . . 18 6.1. Benchmarking Routing Protocols in an SIC . . . . . . . . 18 6.2. Benchmarking Transport Protocols in an SIC . . . . . . . 19 7. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 19 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 19 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 19 10. Security Considerations . . . . . . . . . . . . . . . . . . . 20 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 20 11.1. Normative References . . . . . . . . . . . . . . . . . . 20 11.2. Informative References . . . . . . . . . . . . . . . . . 21 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 22 1. Introduction In the past few years, thanks to the innovative technologies emerged from the aerospace industry, we have witnessed the rapid evolution and deployment of satellite Internet constellations (SIC) in low earth orbit (LEO). These SICs, such as SpaceX's Starlink, OneWeb and Amazon's Kuiper project, are actively deploying hundreds to thousands of broadband LEO satellites in the outer space, and they promise to realize pervasive, high-throughput and low-latency Internet services for terrestrial users globally [Latency-analysis][Ground-relays][SpaceRTC]. Network performance, which is typically affected by many practical factors such as the concrete implementation of network protocols and hardware capabilities, is very critical for satellite Internet service providers (SISP). Therefore, it should be important for SISPs to conduct laboratory characterization to benchmark and understand the network performance of their dedicated implementations of new network techniques before deploying them into the outer space. For example, a SISP may need to comprehensively and systematically assess the network performance of a new address allocation mechanism or a new routing policy in an experimental environment before the launch, and understand how well will these new techniques perform on existing SIC architecture in advance. Ideally, a laboratory benchmark environment (LBE) is expected to simultaneously accomplish fidelity and flexibility. 
However, existing benchmarking methodologies for terrestrial networks are insufficient to create a desired LBE for SICs due to several unique characteristics of SICs. First, due to the expensive manufacturing and launch cost, constructing an experimental satellite network using a number of real satellites should be technically and economically Lai, et al. Expires 11 January 2024 [Page 3] Internet-Draft Benchmarking SIC Network Performance July 2023 difficult. Second, benchmarking network performance of SICs via numerical or discrete-event-based simulation [Hypatia][StarPerf] is fidelity-limited. Although network simulators can flexibly simulate satellite dynamics and constellation topology variation, they have limited capability to support the run of real system codes and network functions as in a real deployment. The abstraction-level of simulators might be too high to capture system-level effects as in real systems, such as power consumption and software overhead under heavy workloads. Finally, while network emulations [NIST-Net][VT-Mininet] can create virtual LBEs by integrating a number of virtual machines or containers to support the benchmark of real implementations of network protocols and functions, existing emulators are not constellation-consistent, because they inherently lack the ability of mimicking constellation-wide LEO dynamics and corresponding time-varying network behaviors as in a real SIC. This draft aims to provide basic considerations as specifications to guide network performance benchmark for SICs. Since an LEO satellite network constructed upon SICs has many unique characteristics as compared to existing terrestrial networks, our considerations in this draft include: (1) multiple networking models of emerging SICs; (2) a data-driven benchmarking approach that enables testers to build a LBE with acceptable flexibility and fidelity to support various test cases; (3) critical configuration parameters that might affect the SIC network performance; and (4) suggested test cases for SIC network performance benchmarking. 2. Notation and Terminology The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119]. This document uses the following acronyms and terminologies: SIC: Satellite Internet Constellation LEO: Low Earth Orbit SISP: Satellite Internet Service Provider LBE: Laboratory Benchmark Environment OSPF: Open Shortest Path First [RFC2328] TCP: Transmission Control Protocol [RFC0793] QUIC: Quick UDP Internet Connections [RFC9000] Lai, et al. Expires 11 January 2024 [Page 4] Internet-Draft Benchmarking SIC Network Performance July 2023 SRLA: Satellite Relays for Last-mile Accessibility SRGS: Satellite Relays for Ground Station Networks GSSN: Ground Station Gateway for Satellite Networks DASN: Directly Accessed Satellite Networks GS: Ground Station SHF: Super High Frequency EHF: Extremely High Frequency GSaaS: Ground-Stations-as-a-Service VSAT: Very Small Aperture Terminal ISL: Inter-Satellite Link GSL: Ground-Satellite Link LoS: Line-of-Sight DUT: Device Under Test SUT: System Under Test 3. SIC Networking Models 3.1. SIC Components In particular, an emerging SIC typically includes a large number of low-flying broadband satellites, and geographically distributed ground facilities such as ground stations and user terminals (e.g. satellite dish). 
LEO broadband satellites relay and amplify radio telecommunication signals via transponders. These satellites can be equipped with high-speed radio and laser links [ISL-links], and thus promise to enable high-throughput inter-satellite and ground-satellite communication. To achieve low communication latency, emerging broadband satellites are operated in LEO to reduce the propagation latency. For example, the first phase of SpaceX's Starlink constellation is operated at about 550km altitude. As of September 2022, Starlink has already deployed more than 3000 mass-produced satellites with Ka-/Ku-/E-band phased array antennas and laser transponders (in some latest satellites). Lai, et al. Expires 11 January 2024 [Page 5] Internet-Draft Benchmarking SIC Network Performance July 2023 Ground stations are terrestrial radio stations designed for telecommunication with satellites. Typically, they are deployed on the earth surface, and communicate with satellites by transmitting and receiving radio telecommunication signals in the super high frequency (SHF) or extremely high frequency (EHF) bands. If a ground station successfully exchanges radio waves to an LEO satellite, it then establishes a telecommunication connectivity. Satellite Internet service providers often operate a large number of geo- distributed ground stations to control and coordinate their satellites. More recently, the world's leading cloud providers such as Amazon and Microsoft are actively deploying their Ground-Stations- as-a-Service (GSaaS) platforms [Amazon-GS][Microsoft-GS], allowing satellite operators to use ground services on a flexible "pay-as-you- go" basis with affordable costs, and without the need to deploy their own ground infrastructures. User terminals, or very small aperture terminals (VSAT), satellite dishes, can be thought of as a special kind of small ground stations designed for connecting terrestrial users and satellites. In some practical SICs like the current form of Starlink, terrestrial users connect their handsets to broadband satellites via a signal conversion process performed by a dish-like terminal in the middle. 3.2. Networking Models of Emerging SICs At a high level, an LEO satellite network built upon SICs can be described as a dynamic graph, where each node presents a satellite, a ground station or a user terminal. A link connecting two ends in the graph refers to an inter-satellite link (ISL) or a ground-satellite link (GSL) in practice. The state of a link (i.e. active or inactive) might change over time, due to the dynamics of satellites and changes of inter-visibility. In practice, the concrete networking model, which describes how different components in an SIC are inter-connected to construct the network, could be different depending on the concrete SIC architecture and deployment. Based on the status quo of real-world commercial SICs and the latest academic literatures, we consider four representative SIC networking models for network performance benchmarking. (1) Satellite relays for last-mile accessibility (SRLA). Satellites and ground facilities can be integrated based on the classic "bent- pipe" architecture without the support of ISLs. In this model, satellites are used as relays to provide last-mile accessibility for terrestrial users. Specifically, user traffic from ground are first transmitted to the satellite, which then sends it right back down again like a bent pipe. This networking model is currently used by Lai, et al. 
Expires 11 January 2024 [Page 6] Internet-Draft Benchmarking SIC Network Performance July 2023 many ISIPs such as OneWeb. Figure 1 plots an example illustrating how two terrestrial users communicate with each other. During an end-to-end session, packets from the sender are first forwarded to a sender-side ground station, then to a receiver-side ground station through terrestrial Internet, and finally to the receiver by another satellite. +---------+ +---------+ +---------+ +---------+ |Satellite| |Satellite| |Satellite| |Satellite| +----+----+ +-----+---+ +----+----+ +----+----+ / \ / \ / \ no ISL support / \ / \ / \ +----+----+ +----+----+ ------------- +----+----+ +----+----+ | User | | Ground | |Terrestrial| | Ground | | User | | Terminal| | Station |<-->| Internet |<-->| Station | | Terminal| +---------+ +---------+ ------------- +---------+ +---------+ sender receiver Figure 1: SRLA: satellite relays for last-mile accessibility. (2) Satellite relays for ground station networks (SRGS) [Ground-relays]. Figure 2 depicts another "bent-pipe"-based inter- networking paradigm, where geo-distributed ground stations work as routers to construct a Layer-3 network. The only processing performed by satellites is to switch packets between two connected ground facilities. Note that in this networking model no satellites are equipped with ISLs. In a end-to-end communication session, packets from the sender is routed to the receiver by routes over satellites and ground stations. +---------+ +---------+ +---------+ +---------+ |Satellite| |Satellite| |Satellite| |Satellite| +----+----+ +-----+---+ +----+----+ +----+----+ / \ / \ no ISL / \ / \ / \ / \ / \ / \ / \ / \ / \ / \ +----+----+ +----+----+ +----+----+ +----+----+ +----+----+ | User | | Ground | | Ground | | Ground | | User | | Terminal| | Station | | Station | | Station | | Terminal| +---------+ +---------+ +----+----+ +---------+ +---------+ sender receiver Figure 2: SRGS: satellite relays for ground station networks. (3) Ground station gateway for satellite networks (GSSN) [Internet-backbone]. Figure 3 shows another inter-networking approach based on ISLs. Leveraging ISLs, LEO satellites can build Lai, et al. Expires 11 January 2024 [Page 7] Internet-Draft Benchmarking SIC Network Performance July 2023 space routes to forward Internet traffic for long-haul communication, without the need of a large number of ground station relays. Ground stations work as an access point or a gateway for users. Satellites and ground stations jointly build a Layer-3 network for wide-area communication. During an end-to-end transmission, packets from the sender are first routed to a ground station via terrestrial networks, then to the receiver side ground station over satellite paths constructed by ISLs, and finally to the receiver by terrestrial network again. With inter-satellite communication enabled by ISLs, this networking model may require less ground stations as compared to SRLA and SRGS. ISLs +---------+ +---------+ +---------+ ISLs -------|Satellite|------|Satellite|--------|Satellite|----------- +----+----+ +-----+---+ +----+----+ / \ / \ / \ +----+----+ +----+----+ +----+----+ +----+----+ | User | Terrestrial | Ground | | Ground | Terrestrial | User | | Terminal|<----------->| Station | | Station |<----------->| Terminal| +---------+ Internet +---------+ +---------+ Internet +---------+ sender receiver Figure 3: GSSN: ground station access for satellite networks. (4) Directly accessed satellite networks (DASN) [Ground-relays][DDos-user-terminal]. 
Figure 4 depicts another networking model, in which users install satellite terminals to directly access an ISL-enabled satellite network, enabling long-haul communication without the assistance of geo-distributed ground stations. In this model, satellite routers run dedicated space routing protocols to calculate their routing tables, and forward traffic from/to terrestrial users directly. Each satellite may also perform network functions beyond routing, such as host configuration (e.g., IP address and DNS allocation) for terrestrial user terminals.

       ISLs  +---------+        +---------+         +---------+  ISLs
      -------|Satellite|--------|Satellite|---------|Satellite|-------
             +----+----+        +-----+---+         +----+----+
                 / \                / \                 / \
   +----+----+                                        +----+----+
   |  User   |                                        |  User   |
   | Terminal|                                        | Terminal|
   +---------+                                        +---------+
     sender                                             receiver

   Figure 4: DASN: satellite networks directly accessed by terrestrial
             users.

4. Considerations for SIC Benchmarking Methodology

4.1. LBE Requirements

Ideally, an LBE built for benchmarking SIC network performance should achieve acceptable realism and flexibility at acceptable cost. We summarize four baseline requirements as follows.

(1) Constellation characteristics. The LBE is expected to mimic the spatial and temporal constellation-wide characteristics of real mega-constellations. For example, the LBE is expected to simulate/emulate network nodes at the same scale as a real mega-constellation, and to characterize the high dynamicity of LEO satellites as well as its impact on network behaviors over time.

(2) Network-level realism. The LBE is expected to support running real system code and deploying functionality similar to that of a real system and networking stack.

(3) Flexibility. As of this writing, emerging mega-constellations are evolving rapidly, and many of them plan to launch hundreds to thousands more LEO satellites. Since a SISP's operating constellations might change frequently, the LBE is expected to flexibly support various network topologies at scale and to load various network functions to meet diverse benchmarking requirements.

(4) Usability. Finally, as we target a laboratory-level benchmarking methodology, the LBE is also expected to be controllable and low-cost, and to provide easy-to-use programmable interfaces for testers to support diverse benchmarking requirements.

4.2. Exploiting A Data-driven Approach for SIC Benchmarking

We consider a data-driven approach for creating an LBE that can satisfy the above requirements for benchmarking the network performance of SICs. Our consideration is inspired by an important observation about the current satellite Internet ecosystem: many organizations (e.g., regulators and satellite operators) and end users have shared a collection of public data with the community, including constellation regulatory information, orbital data observed from real satellites, ground station distributions, and network capacities measured at terrestrial user terminals.
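As a concrete illustration of how such public data might feed a benchmark, the following minimal Python sketch shows one possible way for a collector to normalize regulatory shell parameters and crowd-sourced ground station records into a single structure that later drives topology generation. The class and field names are illustrative assumptions (not part of any existing tool); the Starlink shell values come from the regulatory table in Section 5.2.1.1, the default antenna quota follows Section 5.2.3.2, and the default minimum elevation angle is an assumed placeholder.

   # Illustrative sketch of a constellation-relevant information
   # record; names and defaults are assumptions, not an existing API.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Shell:                          # regulatory/orbital data
       altitude_km: float
       inclination_deg: float
       num_orbits: int
       sats_per_orbit: int

   @dataclass
   class GroundStation:                  # crowd-sourced distribution data
       lat_deg: float
       lon_deg: float
       antennas: int = 8                 # quota suggested in Section 5.2.3.2
       min_elevation_deg: float = 25.0   # assumed placeholder value

   @dataclass
   class ConstellationInfo:              # what the collector hands to the
       name: str                         # satellite network emulator
       shells: List[Shell] = field(default_factory=list)
       ground_stations: List[GroundStation] = field(default_factory=list)

   # Example record driven by public regulatory data (Starlink shell S1).
   starlink_s1 = ConstellationInfo(
       name="Starlink",
       shells=[Shell(altitude_km=550, inclination_deg=53,
                     num_orbits=72, sats_per_orbit=22)])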
Based on this observation, we consider creating an LBE for SIC benchmarking by judiciously combining real data traces, model-based orbit and network analysis, and large-scale network system emulation to construct a real-data-driven digital twin, i.e., a virtual representation synchronized with a real physical SIC, running in a terrestrial environment for SIC benchmarking.

In particular, the considered benchmarking approach can be summarized as follows. First, a crowd-sourcing approach is leveraged to collect, combine and explore realistic constellation-relevant information, from which spatial and temporal characteristics consistent with real mega-constellations are calculated. Second, driven by such realistic information, a large number of networked virtual nodes and links are exploited to flexibly emulate a customized laboratory environment that characterizes system-level effects and network behaviors consistent with a real SIC.

Figure 5 depicts an overview of the considered data-driven approach for benchmarking the network performance of SICs. The benchmarking environment consists of four major components, as follows.

                        +-----------------------+
                        | Constellation-relevant|
                        | Information Collector |
                        +-----------------------+
                                    |
                                    v
                      +----------------------------+
                      | +----+----+----+----+----+ |
                      | | Virtual SIC Environment| |
  +-----+             | |  (emulated satellites  | |
  |     |  interactive| |  and ground stations)  | |
  | DUT |<----------->| +----+----+----+----+----+ |
  |/SUT |   traffic   |                            |
  +-----+             | Satellite Network          |
                      | Emulator                   |
                      +----------------------------+
                                    ^
                                    |
                        +-----------------------+
                        |   Traffic Generator   |
                        +-----------------------+

     Figure 5: A data-driven approach for benchmarking the network
               performance of SICs.

(1) A constellation-relevant information collector, which collects public constellation information, ground station distributions, and other data from the satellite ecosystem. It maintains the key real-world information to support, guide and drive the construction of SIC benchmarking environments for various benchmarking requirements.

(2) A satellite network emulator, which calculates the spatial and temporal characteristics of a specific SIC and creates a virtual SIC environment. It exploits VM- or container-based emulation to flexibly construct the virtual network environment based on concrete benchmarking requirements, and mimics satellite dynamics as well as their impact on network conditions (e.g., propagation latency changes, connectivity loss and re-establishment).

(3) A device under test (DUT) or system under test (SUT), which contains or runs the concrete implementation required for testing and connects to the virtual SIC environment to exchange interactive traffic. The DUT/SUT, together with the satellite network emulator, collaboratively constructs the benchmarking environment. For example, in practice, the DUT/SUT can be a satellite hardware prototype running a tailored space routing mechanism required for testing.

(4) A traffic generator that generates network traffic to drive the network performance benchmarking.

4.3. Benchmarking Workflow

(1) Experiment preparation. A tester first prepares the concrete implementation under test, e.g., a new satellite routing program or a new transport protocol implementation tailored for satellite Internet.

(2) Benchmarking environment creation.
The tester then defines a network topology, i.e., a graph in which edges represent network links and nodes represent satellites, ground stations or end hosts, and creates the SIC benchmarking environment.

(3) DUT/SUT deployment. Once the benchmark environment is constructed, the tester loads the implementation under test on the corresponding nodes in the environment. For example, if a tester needs to benchmark a new distributed routing program, the routing implementation should be loaded on each emulated satellite in the virtual environment and on the DUT/SUT. The DUT/SUT is then connected to the virtual environment.

(4) Run test cases. Finally, the tester runs the dedicated test cases on the experimental network under specific application traffic. Performance results (e.g., latency, throughput, and route convergence time) can be measured for further in-depth analysis.

4.4. Benchmarking Scope

The considered benchmarking approach mainly targets benchmarking the network performance of a dedicated network technique, as well as its system effects at various layers of the Internet protocol stack in an SIC, for example, evaluating a new routing or transport-layer protocol, or assessing the network performance of a new topology design in a highly dynamic, resource-constrained virtual SIC environment. The scale of the benchmark experiment supported by the considered approach is closely related to the resources provided by the underlying physical machines used to create the LBE.

5. Considerations for Benchmarking Environment Configuration

Next we discuss considerations for multiple configuration parameters of the benchmarking environment, which might be closely related to the benchmarking results.

5.1. Terminology and Definition of the Parameters

5.1.1. Parameters on Constellation Topology

The topology of a constellation is jointly determined by many constellation-relevant parameters, including the orbit inclination, altitude, number of orbits, number of satellites in each orbit, connectivity pattern for inter-satellite and ground-satellite communication, number of ISLs on each satellite, etc.

Inclination is the angle between the orbital plane and the Equator. Typically, the inclination of polar orbits is about 90 degrees. Altitude is measured above sea level and determines the orbital velocity of a satellite. Emerging SICs consist of low-flying satellites with altitudes of less than 2000 km to enable low communication latency. The above orbital parameters, together with the number of orbits and the number of satellites, jointly affect the coverage of the satellite constellation.

Connectivity pattern indicates how satellites inter-connect with each other, and how satellites connect to visible ground stations. There are two classic ISL connectivity patterns. +Grid [Space-ISL] suggests that each satellite connects to two adjacent satellites in the same orbit and to two other satellites in adjacent orbits (a sketch of the +Grid neighbor derivation is given at the end of this subsection). Motif [Motif] is a repetitive pattern where each satellite connects to multiple visible satellites and each satellite's local view is the same as that of any other.
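As referenced above, the following minimal Python sketch derives the +Grid ISL neighbor set for a single uniform shell, assuming satellites are indexed by (orbit, slot) and ignoring the phase offset between adjacent orbits. The function and variable names are illustrative and not part of any existing emulator.

   # Minimal +Grid sketch for one uniform shell; indices and names are
   # illustrative assumptions.
   def plus_grid_isls(num_orbits, sats_per_orbit):
       """Return the +Grid ISL edge list: each satellite links to its
       neighbors within the same orbit and to the same slot in the two
       adjacent orbits (with wrap-around)."""
       edges = set()
       for o in range(num_orbits):
           for s in range(sats_per_orbit):
               intra = (o, (s + 1) % sats_per_orbit)   # same orbit
               inter = ((o + 1) % num_orbits, s)       # adjacent orbit
               for nbr in (intra, inter):
                   edges.add(tuple(sorted(((o, s), nbr))))
       return sorted(edges)

   # Example: a Starlink-S1-like shell (72 orbits, 22 satellites per
   # orbit) yields 2 * 72 * 22 = 3168 ISLs, i.e. 4 ISLs per satellite.
   assert len(plus_grid_isls(72, 22)) == 3168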
5.1.2. Parameters on Ground Station Distribution

There are three primary parameters related to ground stations that might affect the benchmarking results. First, the geographical location, i.e., the latitude and longitude of each ground station. Second, the number of available antennas for space-ground communication; this value affects the number of satellites that a ground station can connect to simultaneously. Third, the minimum elevation angle, which determines the line-of-sight (LoS) of the ground station and affects the available duration of space-ground communication.

5.1.3. Parameters on Network Links

The total capacity of satellite communication systems has increased significantly over the past decade. Emerging broadband satellites can be equipped with high-speed radio or laser communication links. Link capacity is a critical parameter that can significantly affect the constellation-wide network performance of an SIC. Regarding ground-to-satellite link capacity, during the beta test of Starlink, end users could achieve data rates varying from 50 Mbps (uplink) to 150 Mbps (downlink) in most available locations. In addition, many planned constellations also suggest the use of laser inter-satellite links, which can potentially support data transmission rates of up to tens or even hundreds of Gbps for inter-satellite communication [Bandwidth]. To reasonably benchmark the network performance of an SIC, a tester can configure the link capacity in the benchmark environment based on the concrete assessment requirements.

5.2. Setting of the Parameters

We discuss different data-driven parameter settings based on best practices.

5.2.1. Constellation Orbital Parameters

Two approaches are used in practice, namely Regulatory-Data-Driven and Live-Data-Driven. Regulatory-Data-Driven Orbital Parameters SHOULD be tested, and Live-Data-Driven Orbital Parameters are RECOMMENDED.

5.2.1.1. Regulatory-Data-Driven Orbital Parameters

Orbital parameters of the constellations are reviewed and publicly disclosed by regulatory agencies (e.g., FCC, ITU) and should be followed by the operators in principle, thus representing the ideal situation of the constellations. Both polar-orbit and inclined-orbit constellations SHOULD be tested. If the DUT/SUT is designed with orbital preferences, the preferences MUST be stated in the report.

The table below provides the orbital parameters of state-of-the-art networking constellations from regulatory agencies.
   +==========+==========+=============+======+============+==========+
   | Name and | Altitude | Inclination | # of | # of       | Polar /  |
   | Shell    | (km)     | (degree)    |orbits| satellites | Inclined |
   |          |          |             |      | per orbit  |          |
   +==========+==========+=============+======+============+==========+
   | Starlink | 550      | 53          | 72   | 22         | Inclined |
   | S1       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Starlink | 540      | 53.2        | 72   | 22         | Inclined |
   | S2       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Starlink | 570      | 70          | 36   | 20         | Inclined |
   | S3       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Starlink | 560      | 97.6        | 6    | 58         | Polar    |
   | S4       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Starlink | 560      | 97.6        | 4    | 43         | Polar    |
   | S5       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Kuiper   | 630      | 51.9        | 34   | 34         | Inclined |
   | K1       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Kuiper   | 610      | 42          | 36   | 36         | Inclined |
   | K2       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Kuiper   | 590      | 33          | 28   | 28         | Inclined |
   | K3       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Telesat  | 1015     | 98.98       | 27   | 13         | Polar    |
   | T1       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | Telesat  | 1325     | 50.88       | 40   | 33         | Inclined |
   | T2       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | OneWeb   | 1200     | 87.9        | 12   | 49         | Polar    |
   | O1       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+
   | OneWeb   | 1200     | 55          | 8    | 16         | Inclined |
   | O2       |          |             |      |            |          |
   +----------+----------+-------------+------+------------+----------+

     Table 1: Regulatory Data on Orbital Parameters of State-of-the-Art
              Networking Constellations.

5.2.1.2. Live-Data-Driven Orbital Parameters

Orbital parameters can also be set based on live constellation GP data (general perturbations orbital data, also known as TLE data) from CelesTrak.org [CelesTrak]. The GP data is produced by fitting observations (radar and optical) from the US Space Surveillance Network (SSN) and is provided continuously, thus representing the live situation of the constellations. Between GP and SupGP, which are both provided, SupGP data is RECOMMENDED, as SupGP (Supplemental GP) is derived directly from owner/operator-supplied orbital data, providing reduced latency and improved accuracy compared with GP. The maximum age of the GP or SupGP data SHOULD be less than 1 day and MUST be less than 5 days.

Compared to Regulatory-Data, Live-Data is more accurate (in terms of per-satellite position) and also easy to obtain. However, Live-Data requires an extra orbit determination process (to infer inter-satellite relationships) to support network experiments. Once the orbit determination process is standardized, Live-Data-Driven Orbital Parameters SHOULD be used for benchmarking.
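As an illustration of the Live-Data-Driven setting, the sketch below loads TLE-formatted GP data and propagates satellite positions with SGP4. It assumes the third-party python-sgp4 package and CelesTrak's GP query interface; the exact query URL and group name may differ, the helper names are illustrative, and the default age filter mirrors the 1-day recommendation above.

   # Illustrative sketch only: assumes the python-sgp4 package and a
   # CelesTrak GP query; URL, group and function names are assumptions.
   import urllib.request
   from datetime import datetime, timezone
   from sgp4.api import Satrec, jday

   GP_URL = ("https://celestrak.org/NORAD/elements/gp.php"
             "?GROUP=starlink&FORMAT=tle")

   def load_satellites(url=GP_URL, max_age_days=1.0):
       """Parse 3-line TLE records and drop elements older than max_age."""
       lines = urllib.request.urlopen(url).read().decode().splitlines()
       now = datetime.now(timezone.utc)
       now_jd, now_fr = jday(now.year, now.month, now.day,
                             now.hour, now.minute, now.second)
       sats = []
       for i in range(0, len(lines) - 2, 3):        # name, line1, line2
           sat = Satrec.twoline2rv(lines[i + 1], lines[i + 2])
           age = (now_jd + now_fr) - (sat.jdsatepoch + sat.jdsatepochF)
           if age <= max_age_days:                  # RECOMMENDED: < 1 day
               sats.append((lines[i].strip(), sat))
       return sats

   def position_km(sat, jd, fr):
       """Propagate one satellite to (jd, fr); returns TEME x, y, z in km."""
       err, r, _v = sat.sgp4(jd, fr)
       return r if err == 0 else None

The resulting positions can then be fed to the satellite network emulator to derive time-varying link distances and visibility.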
5.2.2. Ground Station Distribution

It is RECOMMENDED to set the GS distribution based on crowd-sourced data, which is often refined by the fan community based on regulatory data. For example, one crowd-sourced global distribution of Starlink GSes can be found at [Crowd-sourcing], featuring details such as the number of antennas and the construction/operation status of each GS. Moreover, the data can be downloaded in KML format and fed into the benchmarking environment.

Other OPTIONAL data sources for ground station distribution include Amazon AWS GS [Amazon-GS], Microsoft Azure Orbital GS [Microsoft-GS], and SatNOGS [SatNOGS], an open-source global network of satellite ground stations.

5.2.3. Connectivity Pattern

Some connectivity patterns can be observed in live networks, and it is RECOMMENDED to set them up based on crowd-sourced data. For other connectivity patterns, some RECOMMENDED strategies are also discussed in this section.

5.2.3.1. Crowd-Sourcing-Driven Connectivity Pattern

It is RECOMMENDED to set up the connectivity pattern based on crowd-sourced data, if such data is available. For example, the inter-ground-station connectivity of Starlink ground stations has been explored by the fan community [Crowd-sourcing], where real users perform traceroutes from all over the world and gather the results together. The data is also downloadable.

5.2.3.2. Strategy-based Connectivity Pattern

For inter-satellite connectivity, the "+Grid" strategy [Space-ISL] is widely adopted and RECOMMENDED, where each satellite connects to 4 neighbors and the satellites form a massive grid across the constellation. Other OPTIONAL inter-satellite connectivity strategies include "Inner-orbit Only" and "Motif" [Motif].

For ground-to-satellite connectivity, the "Nearest Ground Station with Antenna Quota" strategy is intuitive and RECOMMENDED; an antenna quota of 8 per ground station is RECOMMENDED if more specific data does not exist. Other factors affecting ground-to-satellite connectivity strategies in real-world systems include (1) the angle of elevation, (2) the azimuth, (3) the satellite launch date, and (4) whether a satellite is sunlit [scheduling]. These factors constitute a more complete strategy and are OPTIONAL if data on these factors is available. Specifically, for a given ground station or user terminal, a satellite with (1) a higher angle of elevation, (2) an azimuth that avoids interference with geostationary orbit satellites, (3) a newer launch date, and (4) a sunlit solar panel is preferred.

5.2.4. Network Link

For the more traditional network link setup, a strategy-based setup is RECOMMENDED. For example, the propagation latency of ground-satellite links (RF) and inter-satellite links (free-space optical) can be derived from distance and the speed of light. The capacity of ground-satellite links is RECOMMENDED to be set to 1 to 5 Gbps; the specific value MAY be derived from frequency band information in regulatory data. The capacity of inter-satellite links is RECOMMENDED to be set to 5 to 20 Gbps [ISL-bandwidth]. The packet loss ratio of ground-satellite links is RECOMMENDED to be set dynamically between 0 and 5%, where higher loss ratios occur when a ground-satellite link handover event occurs [IMC-2022].

Although measurement data on path latency and bandwidth from real satellite users [Starlink-status] is relevant to network link setup, we did not find a good way to use it directly. It may help in determining the coefficient when calculating link latency based on distance.
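The following minimal sketch illustrates this strategy-based link setup: one-way propagation latency is derived from distance and the speed of light, while the capacity and loss defaults follow the RECOMMENDED ranges above. The helper names and the particular point values chosen within those ranges are illustrative assumptions.

   # Strategy-based link parameters; defaults follow the RECOMMENDED
   # ranges in Section 5.2.4, names and chosen values are illustrative.
   LIGHT_SPEED_KM_PER_MS = 299792.458 / 1000.0

   def propagation_delay_ms(distance_km):
       """One-way propagation latency derived from distance and c,
       e.g. a zenith GSL to a 550 km satellite is roughly 1.8 ms."""
       return distance_km / LIGHT_SPEED_KM_PER_MS

   def gsl_params(distance_km, handover=False):
       """Ground-satellite (RF) link: 1-5 Gbps, 0-5% loss, with the
       higher loss applied around a handover event."""
       return {"delay_ms": propagation_delay_ms(distance_km),
               "rate_gbps": 2.5,                     # within 1-5 Gbps
               "loss_pct": 5.0 if handover else 0.1}

   def isl_params(distance_km):
       """Inter-satellite (free-space optical) link: 5-20 Gbps."""
       return {"delay_ms": propagation_delay_ms(distance_km),
               "rate_gbps": 10.0,                    # within 5-20 Gbps
               "loss_pct": 0.0}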
6. Considerations for SIC Test Cases

In this section, we consider several test cases that can be used for benchmarking SIC network performance.

6.1. Benchmarking Routing Protocols in an SIC

Network routing plays an important role in guaranteeing good service quality in SICs, since it not only determines the reachability between any two communication ends in the network, but also affects the achievable network performance perceived by customers. Ideally, an SIC routing mechanism is expected to simultaneously maintain high routing reachability for geo-distributed customers during the operation period and provide low-latency, high-throughput paths for delivering various Internet traffic over the SIC. Therefore, it should be very important for satellite Internet service providers to benchmark how well a routing protocol (and its implementation) will perform in their SIC environment.

Objective: given an implementation of the routing protocol under test (e.g., OSPF [RFC2328], BGP [RFC4271] or their variations optimized for space environments), this test case measures its network performance under a specific SIC configuration (e.g., the current form of the first phase of the Starlink constellation, which includes 1584 LEO satellites).

Procedure: create an SIC network topology consisting of 1583 virtual satellites and a real DUT/SUT to emulate the satellite network. In addition, create two virtual user terminals in the virtual environment to emulate the source and destination of a communication session. Deploy the implementation under test on each emulated satellite and on the DUT/SUT. Run the tested routing implementation, and load traffic into the benchmarking environment to start the test.

Measurement: since LEO satellites move in their orbits, the entire network topology changes over time. This test case measures the routing convergence time and the routing reachability under LEO dynamics.

6.2. Benchmarking Transport Protocols in an SIC

Internet transport protocols, such as TCP and QUIC, are expected to function correctly over any kind of network path. For satellite operators, it should be important to understand the network performance of transport protocols over an SIC network path. Note that the unique characteristics of an SIC may impact network performance when using existing standard mechanisms. For example, in an SIC network, end-to-end latency might change due to the fluctuation of network paths caused by high LEO dynamics. Such a non-congestion latency increase might trigger spurious retransmission timeouts, and hence cwnd shrinking, for loss-based congestion control mechanisms such as TCP Reno.

Objective: given an implementation of a transport protocol (e.g., TCP, QUIC or their variations optimized for satellite networks), measure its network performance under a specific SIC configuration.

Procedure: create an SIC network topology consisting of 1583 virtual satellites and a real DUT/SUT to emulate the satellite network. In addition, use the DUT/SUT as the source (e.g., a TCP sender), and create one virtual user terminal in the virtual environment to emulate the destination (e.g., a TCP receiver) of a communication session. Load traffic from the DUT/SUT to start the test.

Measurement: this test case measures the performance of the tested transport protocol, such as end-to-end latency, jitter and throughput achieved at the transport layer.
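As a companion to the transport test case, the sketch below shows one way a tester might replay time-varying path conditions on the emulated ground-satellite link facing the DUT/SUT while the transport test runs, so that latency fluctuation and handover loss are exercised. It assumes the emulated link is realized as a Linux network interface shaped with tc/netem and that the script runs with root privileges; the interface name, trace values and helper names are illustrative assumptions rather than part of the considered emulator.

   # Illustrative only: replays a toy delay/loss trace on a Linux
   # interface with tc/netem; interface name and trace are assumptions.
   import subprocess
   import time

   def apply_link(dev, delay_ms, loss_pct, rate_gbps):
       """Reconfigure the emulated link (requires root and iproute2)."""
       subprocess.run(
           ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
            "delay", f"{delay_ms:.1f}ms",
            "loss", f"{loss_pct:.1f}%",
            "rate", f"{rate_gbps}gbit"],
           check=True)

   # (seconds from start, one-way delay in ms, loss in %): the path
   # latency steps up and a brief loss burst occurs at a GSL handover.
   TRACE = [(0, 20.0, 0.1), (15, 35.0, 5.0), (20, 25.0, 0.1)]

   def replay(dev="sic-gsl0", rate_gbps=2):
       start = time.time()
       for offset_s, delay_ms, loss_pct in TRACE:
           time.sleep(max(0.0, start + offset_s - time.time()))
           apply_link(dev, delay_ms, loss_pct, rate_gbps)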
7. Conclusion

In this draft, we present several considerations as specifications for SIC network performance benchmarking. We describe multiple networking models of emerging SICs, a data-driven benchmarking approach that may enable testers to flexibly build a laboratory benchmark environment to support various test cases, critical configuration parameters that might affect SIC network performance, and several suggested test cases for SIC benchmarking.

8. Acknowledgements

9. IANA Considerations

This memo includes no request to IANA.

10. Security Considerations

Benchmarking activities as described in this memo are limited to technology characterization using controlled devices in a laboratory environment, with dedicated address space and the constraints specified in the sections above. The benchmarking network topology as well as its parameter configurations will be an independent test setup, and the laboratory environment MUST NOT be connected to devices that may forward the test traffic into a production network or misroute traffic to the test management network.

In addition, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT/SUT. Special capabilities SHOULD NOT exist in the DUT/SUT specifically for benchmarking purposes. Any implications for network security arising from the DUT/SUT SHOULD be identical in the lab and in production networks.

11. References

11.1. Normative References

[RFC0793] Postel, J., "Transmission Control Protocol", RFC 793, DOI 10.17487/RFC0793, September 1981.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, DOI 10.17487/RFC2328, April 1998.

[RFC4271] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway Protocol 4 (BGP-4)", RFC 4271, DOI 10.17487/RFC4271, January 2006.

[RFC6582] Henderson, T., Floyd, S., Gurtov, A., and Y. Nishida, "The NewReno Modification to TCP's Fast Recovery Algorithm", RFC 6582, DOI 10.17487/RFC6582, April 2012.

[RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based Multiplexed and Secure Transport", RFC 9000, DOI 10.17487/RFC9000, May 2021.

11.2. Informative References

[Amazon-GS] "Amazon-GS".

[Bandwidth] "Laser Intersatellite Links in a Starlink Constellation: A Classification and Analysis.", 2021.

[CelesTrak] "CelesTrak".

[Crowd-sourcing] "Crowd-Sourcing Starlink Ground Station Distribution".

[DDos-user-terminal] "ICARUS: Attacking low Earth orbit satellite networks.", 2021.

[Ground-relays] "Using ground relays for low-latency wide-area routing in megaconstellations.", 2019.

[Hypatia] "Exploring the "Internet from space" with Hypatia.", 2020.

[IMC-2022] "A Browser-side View of Starlink Connectivity.", 2022.

[Internet-backbone] "Internet backbones in space.", 2020.

[ISL-bandwidth] "ICARUS: Attacking low Earth orbit satellite networks.".

[ISL-links] "A Distributed and Hybrid Ground Station Network for Low Earth Orbit Satellites.", 2020.

[Latency-analysis] "Delay is Not an Option: Low Latency Routing in Space.", 2018.

[Microsoft-GS] "Microsoft-GS".
[Motif] "Network topology design at 27,000 km/hour.", 2019, . [NIST-Net] "NIST Net: a Linux-based network emulation tool.", 2003, . [SatNOGS] "SatNOGS Network", . [scheduling] "Making Sense of Constellations: Methodologies for Understanding Starlink's Scheduling Algorithms.", 2023, . [Space-ISL] ""Internet from Space" without Inter-satellite Links.", 2020, . [SpaceRTC] "SpaceRTC: Unleashing the Low-latency Potential of Mega- constellations for Real-Time Communications.", 2022, . [Starlink-status] "Starlink Status", . [StarPerf] "StarPerf: Characterizing Network Performance for Emerging Mega-Constellations.", 2020, . [VT-Mininet] "VT-Mininet: Virtual-time-enabled Mininet for Scalable and Accurate Software-Define Network Emulation.", 2015, . Authors' Addresses Lai, et al. Expires 11 January 2024 [Page 22] Internet-Draft Benchmarking SIC Network Performance July 2023 Zeqi Lai Tsinghua University 30 ShuangQing Ave Beijing 100089 China Email: zeqilai@tsinghua.edu.cn Hewu Li Tsinghua University 30 ShuangQing Ave Beijing 100089 China Email: lihewu@cernet.edu.cn Qi Zhang Zhongguancun Laboratory Beijing China Email: zhangqi@zgclab.edu.cn Qian Wu Tsinghua University 30 ShuangQing Ave Beijing 100089 China Email: wuqian@cernet.edu.cn Yangtao Deng Tsinghua University 30 ShuangQing Ave Beijing 100089 China Email: dengyt21@mails.tsinghua.edu.cn Lai, et al. Expires 11 January 2024 [Page 23]