I wonder what portion was Fidelity, and what portion was Google. Is Larry mainly giving Google's name to provide brand equity while Fidelity puts in $950 million, for example?

Also, I suspect this is just opening the door to get things to where they are a lot more investible as a subsidiary later.

Annnnnd... it's Elon. Three years from now, he could provide a low-interest corporate bond for $10+ billion. If he buys a pile of them himself, people will shrug and figure it's probably safe - assuming the Gigafactory(s) and Tesla are doing well, and the satellite biz and BFR are a lot further along.
Quote from: MikeAtkinson on 01/21/2015 09:55 am
And the global routing table will have to be updated every few seconds. This is a hard problem.

This is BGP's hard problem though, no? It doesn't seem like SpaceX has to solve it.

Quote from: MikeAtkinson on 01/21/2015 09:55 am
There is also only limited and bounded queuing at internal nodes, the small tag and limited queuing means that the switching can be performed all in hardware, with software only used to update the tag to route map(s).

That describes most routers handling large-scale traffic on the internet, no? Software can't do line-rate forwarding at 100 gigabit, so there is no choice. The way things are scaling seems to push things farther in that direction over time; these days large datacenters are doing L3 (iBGP etc.) to the top of rack.

Quote from: MikeAtkinson on 01/21/2015 09:55 am
This means that when a packet is tagged at the network it is guaranteed to reach the far end (with quality of service guarantees).

Unless otherwise indicated, ElonSat seems like a "best effort" system.
Quote from: MikeAtkinson on 01/21/2015 09:55 am
Quote from: ArbitraryConstant on 01/21/2015 08:08 am
The satellites will have to know the global routing table and know which satellites are near which ground stations.
And the global routing table will have to be updated every few seconds. This is a hard problem.

Nowadays there are several proposals/techniques for handling this and other issues. See for example https://tools.ietf.org/html/rfc5177.

IMO, what is (slightly) more concerning is the depletion of the IPv4 address space, but, who knows, maybe this could be the 'killer application' for a real deployment of IPv6.
Much easier to just use an ethernet+VLAN transport system. The ground systems establish routing protocols and find other routers; when the ground sends a packet, it tells the sat system which router the packet is intended for (the MAC address of that router).
Quote from: macpacheco on 01/21/2015 05:56 pm
Much easier to just use an ethernet+vlan transport system. [...]

The problem with this is: how does it know the MAC address of the router the packet is intended for? That is at the other end of the satellite network, so you need to distribute these MAC addresses. You also need to distribute the IP-to-router information. An ethernet-like system does not work well on a shared-bandwidth uplink where none of the transmitters can hear the others; the only way they will know about collisions is when the far end sends a TCP retransmission request.

The way we solved this for WISDOM is to have a fixed frame for the uplink consisting of a large number of cells. Each ground station first requests some bandwidth on shared cells (a collision means they don't get the bandwidth and have to request it again after some random timeout). The satellite allocates some of the uplink cells to each ground station that has requested it; they can ask for more, or relinquish it, depending on their past uplink traffic profile. These changes of bandwidth come in the cells allocated to the ground station, so there are no further collisions.

We used ATM cells with a simple proprietary routing protocol. There were lots of beams and few inter-satellite links; almost all the time data would flow from one of the beams to another on the same satellite. Some of the beams were dedicated to links to network-interconnect ground stations, which had full use of the entire beam. The satellites were basically an ATM switch, with very little software; both the ATM switching and uplink bandwidth allocation could be done entirely in hardware.
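The request/allocate uplink described above can be sketched as a toy simulation. The cell counts, backoff range, and frame counts here are all invented for illustration, not WISDOM's actual parameters:

```python
import random

# Toy model of the request/allocate uplink described above: ground
# stations first contend on a small pool of shared "request" cells
# (a collision wastes the attempt and forces a random backoff);
# once granted dedicated cells, a station never collides again.
# All numbers are illustrative.

SHARED_CELLS = 4          # contention cells per frame (assumed)
DEDICATED_CELLS = 64      # allocatable cells per frame (assumed)

def run_frames(stations, frames, seed=1):
    rng = random.Random(seed)
    backoff = {s: 0 for s in stations}   # frames left before retrying
    granted = {}                          # station -> allocated cells
    free = DEDICATED_CELLS
    for _ in range(frames):
        picks = {}
        for s in stations:
            if s in granted or backoff[s] > 0:
                backoff[s] = max(0, backoff[s] - 1)
                continue
            # needy station fires a request into a random shared cell
            picks.setdefault(rng.randrange(SHARED_CELLS), []).append(s)
        for cell, contenders in picks.items():
            if len(contenders) == 1 and free > 0:
                granted[contenders[0]] = 1   # grant one cell to start
                free -= 1
            else:
                for s in contenders:          # collision: random backoff
                    backoff[s] = rng.randrange(1, 4)
    return granted

grants = run_frames([f"gs{i}" for i in range(10)], frames=50)
```

With a handful of contention cells and random backoff, ten stations typically all acquire dedicated bandwidth within a few frames, after which the uplink is collision-free, which matches the "no further collisions" property described above.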
The whole forwarding plane must be implemented in an FPGA or ASIC in order to handle a hundred Gbps of aggregate bandwidth per sat. Many large Cisco routers implement forwarding in FPGAs, which allows lower clock speeds with orders of magnitude more forwarding throughput than pure software solutions.

PS: I'm a performance guy who tends to do everything KISS. SpaceX might have more complex ideas.
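For a sense of scale behind the hardware-forwarding claim, a quick back-of-envelope (assuming 100 Gbps aggregate and worst-case minimum-size packets):

```python
# Rough packet-rate arithmetic behind the "must be hardware" claim:
# at 100 Gbps, minimum-size packets leave only a few nanoseconds of
# processing budget per packet - far below what a software forwarding
# path can sustain per core.

link_bps = 100e9             # assumed aggregate bandwidth per sat
pkt_bits = 64 * 8            # worst case: 64-byte minimum-size packets
pps = link_bps / pkt_bits    # ~195 million packets per second
ns_per_pkt = 1e9 / pps       # ~5.1 ns per packet
```

A general-purpose CPU core spends hundreds of nanoseconds on even a trivial per-packet lookup once memory latency is counted, so the line-rate case only closes in an FPGA or ASIC pipeline.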
I'm pretty rusty on this now, but I don't think BGP is a particularly good match for a dynamic satellite network.
Quote from: ArbitraryConstant on 01/21/2015 08:08 am
There's no latency advantage if you just terminate the connection at a local ISP.

It seems to me that the satellite network will be the ISP.

Quote from: ArbitraryConstant on 01/21/2015 08:08 am
The satellites will have to know the global routing table and know which satellites are near which ground stations.

And the global routing table will have to be updated every few seconds. This is a hard problem.

In 2000 I worked on an internet routing project for Nortel (part of a team of 7); I did the UI and control software for the demo system. The idea is that at the edge of the network the IP packets are inspected, tagged with a series of tags, and sent off to internal nodes. At the internal nodes the first tag is used to do the routing and is popped (or swapped for another tag). The routes between nodes are set up by a global entity that manages bandwidth so that there is always enough bandwidth for these routes through the network. This means that when a packet is tagged at the network edge it is guaranteed to reach the far end (with quality-of-service guarantees). There is also only limited and bounded queuing at internal nodes; the small tag and limited queuing mean that the switching can be performed all in hardware, with software only used to update the tag-to-route map(s).

The difficult bit is then the global ('god') system that allocates bandwidth to paths through the network and tells the edge nodes how to tag packets according to their destinations. This is naively an O(n^3) problem, so it doesn't scale well without tricks; for this network n=4025, so the question is not scaling but whether it is tractable at a size of 4025. It is easy to see how such an idea can be applied to a satellite network.

The variable size of IP packets is a problem, so at a lower level we transported them within ATM cells, and developed a clever way of mapping the tags onto the ATM VPI and VCI routing. Some such scheme (perhaps not using ATM, but a larger cell) would be ideal for the satellite network.

[Before working on this project I worked on WISDOM, an EU-funded project to develop a broadband satellite network with Matra-Marconi Space (now Astrium, part of EADS) and various universities and consultancies. We mainly looked at MEO satellites, but also LEO and GEO sats. Nortel's interest was in the ground segment; I worked on the general system design, uplink and downlink protocols, and the network control centre. Another part of Nortel worked on demonstration ground systems (breadboard level) while Astrium did the satellite breadboard. About 1998, Nortel decided not to continue with the project, partly because taking it further would require significant investment, and partly because they reckoned they could make more money by just being a ground-network supplier to all the satellite networks being proposed at the time. - So it is easy to see my interest in these large satellite systems.]
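The pop/swap tag idea described above can be sketched in a few lines. The nodes, labels, and tables here are invented for illustration (the real system encoded tags in the ATM VPI/VCI fields):

```python
# Minimal sketch of tag switching: the edge pushes a stack of labels;
# each internal node looks only at the top label, then either swaps it
# or pops it, and forwards. The per-node tables are what the global
# 'god' system would compute and distribute.

def forward(packet_labels, node_tables, start):
    """Follow a label stack through per-node (label -> action) tables."""
    node, labels = start, list(packet_labels)
    path = [node]
    while labels:
        action = node_tables[node][labels[0]]  # one fixed-size lookup,
        if action[0] == "swap":                # hardware-friendly
            _, new_label, next_node = action
            labels[0] = new_label
        else:                                   # ("pop", next_node)
            _, next_node = action
            labels.pop(0)
        node = next_node
        path.append(node)
    return path

tables = {
    "A": {10: ("swap", 20, "B")},
    "B": {20: ("pop", "C")},
    "C": {30: ("pop", "D")},
}
# The edge node tags the packet with labels covering the full path.
path = forward([10, 30], tables, "A")  # -> ["A", "B", "C", "D"]
```

The appeal for a switch in orbit is exactly what the post says: the forwarding step is a single small-table lookup with no variable-length prefix matching, so it fits in hardware, while software only has to refresh the tables.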
Do not count any revenue from NASA on this. Maybe if they manage to snag something for the Mars part that NASA has a request out for.
Quote from: MikeAtkinson on 01/21/2015 09:55 am
In 2000 I worked on an internet routing project for Nortel [...] It is easy to see how such an idea can be applied to a satellite network. [...]

Is this such a hard problem? It seems to be more geometry than computer science.

Most of the ground stations will be static, or nearly so (a car doesn't move far vs orbital speeds). If you want to talk to an IP address, this can be translated to a lat/long, then calculate the best routing to the least busy sat that's visible to that location. The system needs to maintain a table of IP vs lat/long, but most of these won't change very rapidly.

If 4,000 sats each have a direct link to 25 other sats, that's 100,000 links. ISTM this state info could be broadcast across the network quite easily - 1 Mbps reserved on each link would propagate the info in seconds to every other node, I'd think.

For faster-moving objects, like a jetliner, the network could route to the last known lat/long. If the destination is not in range, that node could then forward the request to each of its neighbouring nodes, several of which should have the plane in range. The nodes which can see the target could coordinate their info to derive a new lat/long, then propagate that to correct the routing tables.

But this assumes dumb ground stations. A jetliner could report its GPS location, heading, and even any future course changes programmed into the autopilot. Once this info is propagated, it might not need any updates for hours unless the plane deviates for something like bad weather.

Each sat has a good idea of the location of any ground stations in range due to the phased array. Several sats together could pinpoint something even more accurately, using a sort of GPS-in-reverse. Any station spoofing its GPS location could be rejected from the network.

cheers, Martin
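The "translate an IP to a lat/long, then pick the least busy visible sat" step is indeed mostly geometry. A rough sketch; the altitude and minimum elevation angle are assumed values, not anything SpaceX has published:

```python
import math

# Geometry-first routing sketch: given a destination's lat/long, find
# the satellites that can see it (above a minimum elevation angle) and
# pick the least-loaded one. Altitude and elevation cutoff are assumed.

R_EARTH = 6371.0             # km
ALT = 1100.0                 # km, assumed LEO altitude
MIN_ELEV = math.radians(40)  # assumed minimum usable elevation

def to_ecef(lat, lon, radius):
    """Spherical lat/long (degrees) to Cartesian coordinates (km)."""
    lat, lon = math.radians(lat), math.radians(lon)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

def visible(ground, sat):
    """True if sat is above MIN_ELEV as seen from ground (both Cartesian)."""
    gx, gy, gz = ground
    dx, dy, dz = sat[0] - gx, sat[1] - gy, sat[2] - gz
    d = math.sqrt(dx*dx + dy*dy + dz*dz)
    # sin(elevation) = dot(local up, line of sight), both normalized
    up_dot = (gx*dx + gy*dy + gz*dz) / (R_EARTH * d)
    return math.asin(max(-1.0, min(1.0, up_dot))) >= MIN_ELEV

def best_sat(ground_lat, ground_lon, sats):
    """sats: list of (lat, lon, load). Index of least-loaded visible sat."""
    g = to_ecef(ground_lat, ground_lon, R_EARTH)
    candidates = [i for i, (la, lo, _) in enumerate(sats)
                  if visible(g, to_ecef(la, lo, R_EARTH + ALT))]
    return min(candidates, key=lambda i: sats[i][2]) if candidates else None

# A busy sat overhead beats an idle sat over the horizon:
choice = best_sat(0.0, 0.0, [(0.0, 0.0, 5), (0.0, 90.0, 0)])  # -> 0
```

Since every node knows the orbital elements, this computation needs no routing-protocol chatter at all for the geometric part; only the load and the IP-to-lat/long table have to be distributed.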
Quote from: meekGee on 01/20/2015 09:42 am
Quote from: hrissan on 01/20/2015 09:28 am
Quote from: meekGee on 01/19/2015 03:36 pm
[...] For lack of any other information, I am assuming 64 planes x 64 satellites, and about 500 kg per each. [...]

[...] splitting 4025 into multipliers gives the following viable solution: 81x25... 81 satellites in 25 planes, perhaps 1 is spare [...]

Yes, of course, that was just to get a quick idea, starting with polar... Everything else stays....

I was just about to ask why not 115 x 35, or 161 x 25, or 175 x 23. (Musk did say this number was probably overprecise.)
Quote from: MP99 on 01/21/2015 10:08 pm
Is this such a hard problem? It seems to be more geometry than computer science. [...] Each sat has a good idea of the location of any ground stations in range due to the phased array. [...]

Thinking about this some more, each sat has a bunch of active ground stations that it's talking to. It will have to do a handoff to another sat as it gets near the station's horizon, but I wonder how often it can just hand over to the following sat in the same plane? That could make the coordination somewhat simpler. As the earth rotates under the constellation, there will come a point where the ground station has to be handed off to the adjacent plane of sats to the East.

There may be reasons for a ground station to switch "randomly" between any of the sats that it sees in its sky, but maybe just "next in plane", "hop Eastwards", and "hop Westwards" (for load balancing) is all that is needed?

cheers, Martin
All that matters is that, under central control, at some time T0, everyone switches to a new switching table, which was distributed in advance and clearly anticipates any geometry changes. All packets that enter the system at time T > T0 use the new tables. I expect changes to occur about once every 30 seconds or so (just looking at a 5400-second orbital period, 80 sats per plane (as has been proposed upthread), and doing it every half-interval).
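The 30-second figure checks out arithmetically:

```python
# With a 5400 s orbital period and 80 satellites per plane, successive
# satellites in a plane are 5400/80 = 67.5 s apart; switching tables
# every half-interval gives a new table roughly every 34 s.

period_s = 5400
sats_per_plane = 80
interval = period_s / sats_per_plane   # 67.5 s between satellites
epoch = interval / 2                   # 33.75 s per table switch
```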
Quote from: meekGee on 01/21/2015 11:01 pm
All that matters is that under central control, at some time T0, everyone switches to a new switching table, which was distributed in advance, and clearly anticipates any geometry changes. [...]

I think that probably doesn't work, as different people will have different weather blowing through and different obstructions to deal with.

Quote from: MP99 on 01/21/2015 10:08 pm
Most of the ground stations will be static, or nearly so (a car doesn't move far vs orbital speeds). [...] The system needs to maintain a table of IP vs lat/long, but most of these won't change very rapidly.

I think this also probably doesn't work, as lat/long won't allow you to infer a priori which satellite can reach someone, for similar reasons.

Quote from: MP99 on 01/21/2015 10:08 pm
If 4,000 sats each have a direct link to 25 other sats that's 100,000 links. ISTM this state info could be broadcast across the network quite easily [...]

The volume of data seems quite large, though - probably hundreds of millions of stations changing every few seconds. That gets untenable fast. And updates would still have a race where they would be behind some of the packets.

All this state information seems like something you'd want to keep localized to a few satellites - the ones that would be the possible handoff candidates.
They can forward the traffic if they receive it erroneously, and send a redirect message to the source satellite (like an ICMP redirect).

You can probably also parcel up the Earth into IPv6 prefixes well enough to generate a set of possible satellites for a given address. IPv4 can be supported via RFC 6877 (meaning it's ignored entirely in the satellites).

This scheme wouldn't directly handle mobile stations, but that can be handled separately in a number of ways. A special set of prefixes could be set aside for stations known to be mobile, and SpaceX could market that at a premium and actually do the work to track them (along with the global routing tables). A few hundred thousand of those would probably work; a few hundred million, probably not. Another option would be a VPN-type system that allows the wifi on the plane to avoid renumbering every few minutes by tunneling somewhere else.
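The "parcel up the Earth into IPv6 prefixes" idea can be sketched by embedding a coarse lat/long grid cell into bits of the prefix. The 2001:db8::/32 base (the documentation prefix) and the 5-degree grid are purely illustrative, not anything SpaceX has described:

```python
import ipaddress

# Encode a coarse lat/long grid cell into 16 bits of an IPv6 prefix,
# so a satellite can derive a destination's rough location (and hence
# the candidate satellites) from the address alone, with no lookup.

BASE = int(ipaddress.IPv6Network("2001:db8::/32").network_address)

def cell_prefix(lat, lon, deg=5):
    """Return the /48 whose bits 32-47 encode the deg x deg grid cell."""
    row = int((lat + 90) // deg)       # 0..35 for 5-degree cells
    col = int((lon + 180) // deg)      # 0..71
    cell = row * (360 // deg) + col    # fits easily in 16 bits
    return ipaddress.IPv6Network((BASE | (cell << 80), 48))

def cell_of(addr):
    """Recover the grid cell index from an address (int or IPv6Address)."""
    return (int(addr) >> 80) & 0xFFFF

net = cell_prefix(51.5, -0.1)          # a cell covering London
```

Any satellite seeing a packet for an address under this base can mask out the cell bits and map them straight to a patch of ground, which is the property the post wants; the mobile-station prefixes would simply live outside this geographic base.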