Wednesday 20 June 2012

NBN, Stuxnet and Security: It's worse than you can believe

What did US Intelligence tell the Australian Government about Real Network Security when a Chinese vendor was vetoed as supplier of NBN (central?) switches?
Now that we have Obama admitting "we did Stuxnet, with a little help", we know that they aren't just capable and active, but aware of higher-level attacks and defences: you never admit to your highest-level capability.

Yesterday I read two pieces that gave me pause: the first, the US Navy replacing Windows with Linux for an armed drone, was hopeful; the other should frighten anyone who understands Security: there's now a market in Zero-Day vulnerabilities.

The things the new world of the NBN has to protect us against just got a lot worse than you can imagine.

Closing:
For once, I’m hoping Bruce Schneier is wrong. But, I doubt it. I’ve already read where high-level contestants who normally compete in Pwn2Own aren’t any more. They would rather keep what they found secret, and make the big bucks.
I've written previously that a Cyberwar will be won and lost in 3 minutes and that the NBN is a central element in a National Security and Economic Security protection strategy for Australia.

Since the Obama disclosure, Governments and essential Utilities and Businesses should be required to run multiple diverse systems, at least for desktops, so they aren't susceptible to monoculture failures: think Irish Potato Famine but 10-100 times worse.

The US Navy announcing they'd needed to rehost a secure, armed platform (move from Windows to Linux) seems to suggest that even their operational/combat networks get compromised (remind you of Stuxnet? "air-gaps" are good but no defence against a determined, capable attacker).

That they've publicly stated "we chose Linux when it absolutely had to be trusted" (my words) might be them hinting, none too subtly, that every other Government and Military should follow their lead: move critical systems off Windows because even we can't keep them "ours".

The other news, that there are both providers and brokers for "zero-day" attacks ($50,000-$250,000 a go for significant platforms) says:

  • there are people or services who can validate claims of "original zero-day exploits".
  • this is far from new, we ("joe public") are just finding out about it now, and
  • there will already be a whole stash of "zero-day" attacks in the hands of Governments and potentially others.
  • Don't think this is just about National Security Espionage: it's also about Commercial Espionage and infiltration, targeted Financial System attacks, 'ransomware' and much more. It opens the way for Organised Criminal Activity way beyond simple Identity Theft in scale and returns. Would the Drug Cartels and Arms Traders be in on this? Who can say... Sophisticated Bad Guys with a ton of cash, no scruples and the means to buy pretty much any technical talent they want. Not a bet I'd take.
But there's a subtlety that's not brought out in the article:
  • many more "zero-day" exploits will be bought than uniquely exist.
  • This isn't just one vendor selling the same thing to multiple buyers. [A great scam until you hustle the wrong people with Big Military Weapons - and then you're dead.]
A wise Intelligence Agency will have its own crew finding "zero-day" exploits, will want to identify everyone working in the area who might be as capable as its own people, and may also save itself the cost of developing exploits that require a lot of leg-work. We know that high-level Intelligence Agencies have routinely recruited outstanding Mathematicians and Systems folk - like Alan Turing during WWII; good commanders put them on tasks that need the extra intellectual horsepower and leave the (relative) donkey-work to others.

If someone offers an Agency a "zero-day" exploit they've just found, for the Agency to refuse it means they've already developed or acquired it elsewhere: this gives away a bunch about what the Agency does and doesn't know. The Agency will always buy to hide its capability. But not before doing a little 'background check' on the seller/discoverer to avoid the scam of "let's sell the same toy to everyone".

Also, if an Agency truly believes that a seller/discoverer is legit and will only pass on its work once, it's worth its while to acquire real and dangerous new exploits to prevent "others" from getting their hands on them. If those third-parties are accessible and like-minded, an Agency might attempt to "bring them into the fold" - to hire them and at least take them off the market.

You'd think that an Agency would harden its own systems against its whole portfolio of "zero-day" exploits, would track the public registries and even create "honey-trap" systems for those exploits: systems that are secure against the exploit but allow the attacker into a fake environment containing false/misleading information (mis-information from highly 'reliable'/'credible' sources is a counter-espionage coup de grâce) - or even initiate active counter-attacks or less invasive track-back and monitoring.

It is guaranteed that all the many Intelligence Agencies (if the US has a unit, so will everyone in the G20 and maybe beyond, especially those in Intelligence-sharing partnerships) know the cost of finding zero-day exploits against many types of targets. As in, they can tell you a dollar value. Part of the driver for third-party purchases will be additional resources/capability, but part is very pragmatic: it's cheaper. You'll always pay 3rd-parties less than it costs you :-)

How do I know this for sure? In 1998, Robert Morris (senior) talked at the AUUG Conference in Sydney. He'd 'retired' from the NSA (and said "you never retire from them") and, after lengthy service at senior levels, knew intimately what could be said publicly and what was never to be said.

He calmly talked about the US mis-placing nuclear weapons (more times than I remember being widely reported) and described a really neat hack that let them listen in on terrestrial phone conversations: put a satellite or plane in the line of a microwave link.

And he said unequivocally, "It costs us $10MM for an 'intercept'". Not only did that imply they had the tools and techniques to break most or all codes, but that they did so at "industrial scale". It wasn't a little cottage industry like Bletchley Park had been, but large enough that they absolutely knew how to cost it and would bill back those resources when requested. Generals and others would have to consider what certain information might be worth to them before blindly requesting it.

Things are much worse than you can imagine, now that there are acknowledged Cyberattack Units and a market in "zero-day" exploits. We can only know after the event just how bad things have been - like the Cold War's Nuclear "incidents".

BTW, while it's possible companies or individuals might deliberately insert backdoors or vulnerabilities into critical software, I find it highly unlikely. The next plodder that comes along to fix a bug in your code (you don't stay on after scoring the jackpot) might just wreck it. If you're really good, people will never notice what you've done.

While there are some people that are that good, there are a huge number who only think they are. They will be caught and dealt with, either via the normal Law Enforcement and court system or by covert activities.

A far more plausible and probable occurrence is for a vendor of "proprietary systems" (closed source, not Open Source) to bow to pressure from Friendly Governments to allow controlled administrator access, or a variation of the Ranum Conjecture, whereby undercover agents infiltrate critical work-teams and insert malicious code.

Whatever Intelligence Agencies are capable of, large Organised Crime is potentially capable of as well. The difference is, "can we make a buck off this". They will do different things and target different systems.

The NBN will become our first line of defence against Cyberattack: let's get everyone behind it both publicly and privately.

Monday 18 June 2012

NBN: Will Apple's Next Big Thing "Break the Internet" as we know it?

Will Apple, in 2013, release its next Game Changer for Television following on from the iPod, iPhone, and iPad?
If they do, will that break the Internet as we know it when 50-250MM people try to stream a World Cup final?

Nobody can supply Terabit server links, let alone afford them. To reinvent watching TV, Apple has to reinvent its distribution over the Internet.

The surprising thing is we were first on the cusp of wide-scale "Video-on-Demand" in 1993.
Can we, twenty years later, get there this time?


Walter Isaacson in his HBR piece, "The Real Leadership Lessons of Steve Jobs" says:
In looking for industries or categories ripe for disruption, Jobs always asked who was making products more complicated than they should be. In 2001 portable music players ... , leading to the iPod and the iTunes Store. Mobile phones were next. ... At the end of his career he was setting his sights on the television industry, which had made it almost impossible for people to click on a simple device to watch what they wanted when they wanted.
Even when he was dying, Jobs set his sights on disrupting more industries. He had a vision for turning textbooks into artistic creations that anyone with a Mac could fashion and craft—something that Apple announced in January 2012. He also dreamed of producing magical tools for digital photography and ways to make television simple and personal. Those, no doubt, will come as well.
This doesn't just pose a problem that can be solved by running fibre to every home, or of who can afford the plan at home; it's much bigger:
  • On-demand, or interactive TV, delivered over the general Internet cannot be done from One Big Datacentre, it just doesn't scale.
  • Streaming TV over IP links to 3G/4G mobile devices with individual connections doesn't scale: not at the radio link, not at the backhaul/distribution links, and not at the head-end.
The simple-minded network demands will drown both the NBN and Turnbull's opportunistic pseudo-NBN.

In their "How will the Internet Scale?" whitepaper, Content Delivery Network (CDN) provider Akamai, begins with:
Consider a viewing audience of 50 million simultaneous viewers around the world for an event such as a World Cup playoff game. An encoding rate of 2 Mbps is required to provide TV-like quality for the delivery of the game over IP. Thus, the bandwidth requirements for this single event are 100 Tbps. If there were more viewers or if DVD (at ~5 Mbps) or high definition (HD) (at ~10 Mbps) quality were required, then the bandwidth requirements would be even larger.
Is there any hope that such traffic levels be supported by the Internet?
And adds:
Because of the centralized CDN’s limited deployment footprint, servers are often far from end users. As such, distance-induced latency will ultimately limit throughput, meaning that overall quality will suffer. In addition, network congestion and capacity problems further impact throughput, and these problems, coupled with the greater distance between server and end user, create additional opportunities for packet loss to occur, further reducing quality. For a live stream, this will result in a poor quality stream, and for on-demand content, such as a movie download, it essentially removes the on-demand nature of the content, as the download will take longer than the time required to view the content. Ultimately, “quality” will be defined by end users using two simple criteria—does it look good, and is it on-demand/immediate?
Concluding, unsurprisingly, that their "hammer" can crack this "nut":
This could be done by deploying 20 servers (each capable of delivering 1 Gbps) in each of 5,000 locations within edge networks. Additional capacity can be added by deploying into PCs and set-top boxes. Ultimately, a distributed server deployment into thousands of locations means that Akamai can achieve the 100 Tbps goal, whereas the centralized model, with dozens of locations, cannot.
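
To make Akamai's arithmetic concrete, here's a minimal back-of-envelope sketch in Python, using only figures from their whitepaper (2 Mbps per viewer, 50 million viewers, 20 x 1 Gbps servers in each of 5,000 edge locations):

    # Back-of-envelope check of Akamai's figures. All values are taken
    # from the quoted whitepaper, not my own measurements.
    viewers = 50e6              # simultaneous World Cup viewers
    rate_mbps = 2               # "TV-like quality" encoding rate

    aggregate_tbps = viewers * rate_mbps / 1e6
    print("Aggregate demand: %.0f Tbps" % aggregate_tbps)           # ~100 Tbps

    # Distributed model: 20 x 1 Gbps servers in each of 5,000 edge locations
    capacity_tbps = 20 * 5000 * 1 / 1000.0
    print("Distributed edge capacity: %.0f Tbps" % capacity_tbps)   # ~100 Tbps

    # The same audience at DVD (~5 Mbps) or HD (~10 Mbps) rates
    for label, mbps in [("DVD ~5 Mbps", 5), ("HD ~10 Mbps", 10)]:
        print("%s: %.0f Tbps" % (label, viewers * mbps / 1e6))      # 250 / 500 Tbps
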
Akamai notes that Verisign acquired Kontiki Peer-to-Peer (P2P) software in 2006 to address this problem. If the Internet distribution channels were 'flat', P2P might work, but they are hierarchical and asymmetrical: in reality they are small networks in ISP server-rooms with long point-to-point links back to the premises. Akamai's view is that P2P networks require a CDN-style control layer to work well enough.

In their "State of the Internet" [use archives] report for Q1, 2011, Akamai cites these speeds:
... research has shown that the term broadband has varying definitions across the globe – Canadian regulators are targeting 5 Mbps download speeds, whereas the European Commission believes citizens need download rates of 30 Mbps, while peak speeds of at least 12 Mbps are the goal of Australia's National Broadband Network. As such, we believe that redefining the definition of broadband within the report to 4 Mbps would be too United States-centric, and we will not be doing so at this time.
As the quantity of HD-quality media increases over time, and the consumption of that media increases, end users are likely to require ever-increasing amounts of bandwidth. A connection speed of 2 Mbps is arguably sufficient for standard-definition TV-quality content, and 5 Mbps for standard-definition DVD-quality video content, while Blu-ray (1080p) video content has a maximum video bit rate of 40 Mbps, according to the Blu-ray FAQ.
There are multiple challenges inherent for wide-scale Television delivery over the Internet:
  • Will the notional customer line-access rate even support the streaming rate?
  • Can the customer achieve sustained sufficient download rates from their ISP for either streaming or load-and-play use?
    • Will the service work when they want it - Busy Hour?
  • Multiple technical factors influence the sustained download rates:
    • Links need to be characterised by a triplet {speed, latency, error-rate}, not just 'speed' (see the throughput sketch after this list).
    • local loop congestion
    • ISP backhaul congestion
    • backbone capacity
    • End-End latency from player to head-end
    • Link Quality and total packet loss
  • Can the backbone, backhaul and distribution networks support full Busy Hour demand?
    • Telcos already know that "surprises" like the Japanese Earthquake/Tsunami, which are not unlike a co-ordinated Distributed Denial of Service attack, can bring an under-dimensioned network down in minutes...
    • With hundreds of millions of native Video devices spread through the Internet, these "surprise" events will trigger storms, the like of which we haven't seen before.
  • Can ISP networks and servers sustain full Busy Hour demand?
  • Can ISPs and the various lower-level networks support multiple topologies and technical solutions?
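
To show why that triplet matters, here's a rough sketch using the well-known Mathis approximation for single-flow TCP throughput; the link figures below are illustrative assumptions, not NBN measurements:

    # Why "speed" alone isn't enough: a rough single-flow TCP throughput
    # estimate using the Mathis approximation,
    #   throughput ~ MSS / (RTT * sqrt(loss)).
    from math import sqrt

    def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
        """Rough upper bound on one TCP flow, in Mbps."""
        rtt_s = rtt_ms / 1000.0
        return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

    # A "100 Mbps" access line can still fail to sustain an HD stream if the
    # path to a distant head-end is long and slightly lossy:
    print(tcp_throughput_mbps(1460, 10, 0.0001))    # nearby server: ~117 Mbps
    print(tcp_throughput_mbps(1460, 200, 0.001))    # distant, lossy: ~1.8 Mbps

Same nominal line "speed", wildly different sustained throughput - which is why latency and loss belong in any dimensioning exercise.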

CISCO, in their "Visual Networking Index 2011-2016" (VNI) report, have a more nuanced and detailed model with exponential growth (Compound Annual Growth Rate, or CAGR). They also flag distribution of video as a major growth challenge for ISPs and backbone providers.

CISCO writes these headlines in its Executive Summary:
Global IP traffic has increased eightfold over the past 5 years, and will increase nearly fourfold over the next 5 years. Overall, IP traffic will grow at a compound annual growth rate (CAGR) of 29 percent from 2011 to 2016.
In 2016, the gigabyte equivalent of all movies ever made will cross the global Internet every 3 minutes.
The number of devices connected to IP networks will be nearly three times as high as the global population in 2016. There will be nearly three networked devices per capita in 2016, up from one networked device per capita in 2011. Driven in part by the increase in devices and the capabilities of those devices, IP traffic per capita will reach 15 gigabytes per capita in 2016, up from 4 gigabytes per capita in 2011.
A growing amount of Internet traffic is originating with non-PC devices. In 2011, only 6 percent of consumer Internet traffic originated with non-PC devices, but by 2016 the non-PC share of consumer Internet traffic will grow to 19 percent. PC-originated traffic will grow at a CAGR of 26 percent, while TVs, tablets, smartphones, and machine-to-machine (M2M) modules will have traffic growth rates of 77 percent, 129 percent, 119 percent, and 86 percent, respectively.
Busy-hour traffic is growing more rapidly than average traffic. Busy-hour traffic will increase nearly fivefold by 2016, while average traffic will increase nearly fourfold. Busy-hour Internet traffic will reach 720 Tbps in 2016, the equivalent of 600 million people streaming a high-definition video continuously. 
Global Internet Video Highlights
It would take over 6 million years to watch the amount of video that will cross global IP networks each month in 2016. Every second, 1.2 million minutes of video content will cross the network in 2016.
Globally, Internet video traffic will be 54 percent of all consumer Internet traffic in 2016, up from 51 percent in 2011. This does not include the amount of video exchanged through P2P file sharing. The sum of all forms of video (TV, video on demand [VoD], Internet, and P2P) will continue to be approximately 86 percent of global consumer traffic by 2016. [emphasis added]
Internet video to TV doubled in 2011. Internet video to TV will continue to grow at a rapid pace, increasing sixfold by 2016. Internet video to TV will be 11 percent of consumer Internet video traffic in 2016, up from 8 percent in 2011.
Video-on-demand traffic will triple by 2016. The amount of VoD traffic in 2016 will be equivalent to 4 billion DVDs per month.
High-definition video-on-demand surpassed standard-definition VoD by the end of 2011. By 2016, high-definition Internet video will comprise 79 percent of VoD.
In their modelling, CISCO use considerably lower video bit-rates than Akamai (~1 Mbps), with an expectation of a 7%/year reduction in required bandwidth (halving bandwidth every 10 years). But I didn't notice any concomitant allowance for increased definition and frame-rate - which will drive video bandwidth demand upwards much faster than encoding improvements drive it down.
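
A quick sanity-check of that compounding claim (the 7%/year figure is CISCO's; the 2 Mbps starting rate is just an example):

    # Does a 7%/year encoding improvement roughly halve the required
    # bit-rate in a decade? (7% is CISCO's figure; 2 Mbps is an example.)
    rate = 2.0                          # Mbps, "TV-like" SD today
    for year in range(10):
        rate = rate * (1 - 0.07)
    print(round(rate, 2), "Mbps after 10 years")    # ~0.97 Mbps, roughly half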

Perhaps we'll stay at around 4 Mbps...

Neither CISCO nor Akamai model for a "Disruptive Event", like Apple rolling out a Video iPod...

History shows previous attempts at wide-scale "Video on Demand" have foundered.
In 1993 Oracle, as documented by Fortune Magazine, tried to build a centralised video service (4 Mbps) based around the nCube massively parallel processor. An SGI system was estimated at $2,000/user, which was 10 times cheaper than an IBM mainframe. A longer, more financially focussed history corroborates the story.

What isn't said in the stories is that the processing model was for the remote-control to command the server, so the database needed to pause, rewind, and slow/fast-forward the streams of every TV. There was no local buffering device to reduce the server problem to "mere streaming", probably because consumer hard-disks were ~100 MB (200 seconds @ 4 Mbps) at the time and probably not able to stream at full rate. A local server with 4 GB of storage would've been an uneconomic $5-10,000.
 He (Larry Ellison, CEO) says the nCube2 computer, made up of as many as 8,192 microprocessors, will be able to deliver video on demand to 30,000 users simultaneously by early 1994, at a capital cost of $600 per viewer. The next-generation nCube3, due in early 1995, will pack 65,000 microprocessors into a box the size of a walk-in closet and will handle 150,000 concurrent users at $300 apiece. 
Why did these attempts by large, highly-motivated, well-funded, technically-savvy companies with a track-record of success fail with very large pots of gold waiting for the first to crack the problem?

I surmise it was a combination of the aggregate head-end bandwidth demand (30,000 users at 4 Mbps is 120 Gbps) and the per-premises cost of the network installation.

Even with current technologies, building a reliable, replicated head-end with that capacity is a stretch, albeit not that hard with 10 Gbps Ethernet now available. Using the then-current, and well known, 100-channel cable TV systems, distributing via coax or fibre to 100,000 premises was possible. But as we know from the NBN roll-out, "premises passed" is not nearly the same as "premises connected". Consumers take time to enrol in new services, as is well explained by Rogers' "Diffusion of Innovations" theory.

The business model would've assumed an over-subscription rate, i.e. at Busy Hour only a fraction of subscribers would be accessing Video-on-Demand content. Thus a single central facility could've supported a town of 1 million people, with one-in-four houses connected [75,000] and a Busy-Hour viewing rate of 40%, as the sketch below shows.
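
A minimal sketch of that oversubscription arithmetic; the household size, take-up and Busy-Hour figures are my assumptions, chosen to line up with the 30,000-concurrent-user nCube figure quoted above:

    # Rough oversubscription model behind the 1993 numbers. Household size,
    # take-up and Busy-Hour viewing rate are assumptions, not Oracle's figures.
    population = 1e6
    people_per_house = 3.3
    take_up = 0.25                      # one-in-four premises connected
    busy_hour_viewing = 0.40            # fraction of subscribers watching at once
    stream_mbps = 4

    premises = population / people_per_house        # ~300,000
    subscribers = premises * take_up                # ~75,000
    concurrent = subscribers * busy_hour_viewing    # ~30,000
    head_end_gbps = concurrent * stream_mbps / 1000

    print("%.0f concurrent viewers -> %.0f Gbps at the head-end"
          % (concurrent, head_end_gbps))            # ~30,000 -> ~120 Gbps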

If Apple trots out a "Game-Changer for Television", with on-demand delivery over the Internet, the current growth projections of CISCO and Akamai will prove to be wild under-estimates.

New networks like the NBN will be radically under-dimensioned by 2015, or at least the ISPs, their Interconnects and backhauls will be...

The GPON fabric of the NBN may handle 2.488 Gbps aggregate downstream, and 5-10 Mbps per household is well within the access speed of even the slowest offered service, 12/1 Mbps.

But how are the VLANs organised on the NBN Layer-2 delivery system? VLAN IDs are 12-bit, so limited to 4,096.
I haven't read the detail of how many distinct services can be streamed simultaneously per fibre and per Fibre Distribution Area, and a 50% household take-up would test those limits quickly.
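
In the meantime, here's a rough per-subscriber sketch assuming the generic GPON standard figures (2.488 Gbps shared downstream, 1:32 split) rather than anything NBN Co has published:

    # How much sustained downstream does each premises get if everyone on a
    # GPON splitter streams at once? Generic GPON standard figures only,
    # not NBN Co's published design.
    downstream_mbps = 2488.0
    split = 32

    per_premises_mbps = downstream_mbps / split
    print("%.0f Mbps per premises if all 32 stream at once" % per_premises_mbps)   # ~78

    hd_mbps = 10
    print("~%.0f simultaneous 10 Mbps HD streams each" % (per_premises_mbps / hd_mbps))

Plenty of raw capacity per premises, in other words; the open question is the one above - how many distinct channels the VLAN/multicast design can carry per fibre and per distribution area.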

When I've talked to Network Engineers about the problem of streaming video over the Internet, they've agreed with my initial reaction:
  • Dimensioning the head-end or server-room of any sizeable network for a central distribution model is expensive and technically challenging,
  • Designing a complete network for live-streaming/download to every end-point of 4-8Mbps sustained (in Busy Hour) is very expensive.
  • Isn't this exactly the problem that multicast was designed for?
The NBN's Layer-2 VLAN-in-VLAN solution should be trivially capable of dedicating one VLAN, with its 4,096 sub-'channels', to video multicast, able to be split out by the Fibre Network Termination Unit (NTU) - not unlike the system TransACT built in the ACT.
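
To illustrate "letting the network multiply the traffic": a multicast receiver only asks its nearest router to join a group, and the single source stream is replicated as late as possible. A minimal sketch of such a receiver (the group address and port are arbitrary examples, nothing defined for the NBN):

    # Minimal IP multicast receiver: the host sends an IGMP "join" and the
    # network, not the sender, replicates packets towards it. The group
    # address and port are arbitrary examples, not NBN parameters.
    import socket, struct

    GROUP, PORT = "239.1.1.1", 5004     # administratively-scoped example group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the local network for this group (triggers an IGMP membership report)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(2048)
        print("%d bytes of the shared stream from %s" % (len(data), sender))

However many households join, the source sends one stream and replication happens in the network as close to the viewers as the topology allows.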

Users' behaviour, their use of Video services, can be controlled via pricing:
  • The equivalent of "Free-to-Air" channels can be multicast and included in the cost of all packages, and
  • Video-on-Demand can be priced at normal per GB pricing, plus the Service Provider subscription fee.
As now with Free-to-Air, viewers can program PVRs to time-shift programs very affordably.

In answer to the implied Akamai question at the start:
  • What server and network resources/bandwidth do you need to stream a live event (in SD, HD and 3-D) to anyone and everyone that wants to watch it?
With multicast, under 20Mbps, because you let the network multiply the traffic at the last possible point.

Otherwise, it sure looks like a Data Tsunami that will drown even the NBN.

Sunday 17 June 2012

NBN: Needed for "Smart Grid" and other New Century Industries

With the release by IBM of "Australia's Digital Future to 2050", by Phil Ruthven of IBISworld, there is now some very good modelling to say "The Internet Changes Everything": some Industry bulwarks of the past are set to disappear or radically shrink, while others, "New Century Industries" (my words), that don't yet exist at scale will come to the fore of our economy.

Previous pieces that link the NBN/Smart-Internet with "Negawatt" programs are now more relevant.
I'd appreciate someone with real Maths and Analytic Economics ability tearing apart my simple assumptions and ideas to create something achievable and provably economically advantageous to Australia. But until then, I have to let my simple arguments try to carry the day.

I think "the Negawatt" is a Big Idea that's been 25 years in the making - and only now are all the pieces assembled to enable it, plus there are strong enough economic, social and political forces demanding it.

The idea of "negawatts" fits into the current Green and ALP programs of Renewable Energy, Carbon Emission Reduction, Economic Stimulus and positioning the economy for the 21st Century, not the 20th. The Nationals and country Independents would also probably be on-side because it supports their constituents and businesses and allows their communities access to New Century Industries.

I'm not sure if the Abbott Opposition could successfully argue against saving money and increasing Productivity and Economic Efficiency, especially for "the smaller end of town" and against creating a widespread economic base for Energy-sector investment/returns - but they've surprised everyone before, possibly including themselves, so I'm not going to speculate on that here.

Can we have a "Smart Grid" and other "Smart Internet" facilities without the NBN, especially without Fibre-to-the-Premises?

The difference between a Turnbull-style pseudo-NBN, opportunistically built from heterogeneous, diverse components, and a majority "pure fibre" NBN is reliability and operational costs.
It's the same as the difference between Microsoft and Apple computing products.

Ten years ago Microsoft was King of I.T. and dominated the market with Apple an irrelevancy.
Since then Apple has come along with multiple game-changing inventions that haven't just redefined the world of computing, but of entertainment and work practices. Along the way they've become one of the largest, most valuable companies in the world.

Ten years ago, nobody would've backed a high-concept, premium-priced product strategy because Microsoft had 'conclusively' proved "Cheap and Cheerful" is what people wanted. Today, everyone should understand the power and desirability of the Apple model in every Industry, not just Technology and Entertainment.

Rudd/Conroy and the ALP may not have been aware of the antecedents or potential of their "Do it Right, First Time" approach to a National Network, but today which approach are investors backing with their wallets: Microsoft's "Cheap and Cheerful" or Apple's "Great Design"?

"Cheap and Cheerful" is all well and good until you need to absolutely rely on it - say for healthcare or energy production. These same factors applying for retail software products translate to National Networks.

There are very powerful reasons people are migrating in droves away from Microsoft platforms and avoiding their new offerings (e.g. Win 8 Mobile): too much pain, too complex, underwhelming performance, features, security, administration and considerably lower "value for money" in the mid-term. What you save in upfront costs you outlay more than ten-fold in making it work for you.

Perhaps a pseudo-NBN might work, but without a unified common infrastructure the technical barriers to entry are high, potentially insurmountable, and the market-access problems with considerably less than universal access will result in a needlessly crippled industry: exactly like Cable TV.

Our multiple overlaid mobile phone and ADSL2 networks  demonstrate exactly what our incumbent Telcos will do if left to their own devices: not just "not co-operate", but actively work towards disparate, incompatible systems.

Without the prospect of universal and guaranteed market access, no sane investor is going to enter the sector, rerunning the Cable TV debacle. That denies Australia access to an emerging sector it desperately needs economically, one which builds on one of our finest traits: inventiveness.

Australian markets seem to foster monopolies and duopolies - the reasons why don't matter.
We are stuck with it, like we are stuck with Coles and Woolies, Optus and Telstra and just a few brands of petrol. Without a common infrastructure, we are guaranteed to replicate the same set of fractured Telecomms enclaves we've seen for the last 25 years of deregulation.

A Turnbull pseudo-NBN will create a balkanised network with crippling access fees and incompatible or conflicting network access software from which we'll never recover, rerunning our Cable TV experience and preventing us from realising a profitable or useful "Smart Grid" and a host of other New Century Industries.

Saturday 2 June 2012

NBN: The Devil in the Technical Details

There's stuff that I don't know about the technical implementation of the NBN using Gigabit PON (Passive Optical Network) - and I'm not sure where to find the answers:
  • What sort of fibre are they using?
    • Can it be upgraded to normal Ethernet?
  • Does this fibre have a 20km range?
    • Is that 28dB loss or more in 20km?
  • How many dB loss do the splitters introduce?
  • The Wikipedia page suggests that the 2.488 Gbps downstream link is normally split across 32 subscriber fibres, within a 28 dB loss budget (see the link-budget sketch after this list).
    • Is that a single splitter in-the-field? [low loss, high fan-out]
    • Or a number of splitters that will be deployed between the head-end and subscriber.
  • Will NBN Co be laying additional empty conduit/pipe in the pits it digs?
    • Or if using aerial cables (power poles), making provision to easily run additional fibre.
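
To put rough numbers on those loss questions, here's a back-of-envelope optical link budget using typical textbook values (0.35 dB/km fibre loss, ~17.5 dB for a 1:32 splitter, a couple of dB for splices and connectors) rather than NBN Co's actual design figures:

    # Back-of-envelope GPON optical link budget. All loss values are typical
    # textbook figures, not NBN Co design numbers.
    from math import log10

    distance_km = 20
    fibre_db_per_km = 0.35                      # ~1310 nm single-mode
    splitter_db = 10 * log10(32) + 2.5          # 1:32 split: ~15 dB ideal + excess
    splices_db = 1.5                            # connectors and splices, lumped

    budget_db = 28                              # GPON Class B+ optics
    total_loss = distance_km * fibre_db_per_km + splitter_db + splices_db
    print("Total loss ~%.1f dB against a %d dB budget (margin %.1f dB)"
          % (total_loss, budget_db, budget_db - total_loss))
    # ~26.1 dB, leaving ~1.9 dB: 20 km and a single 1:32 split only just fit.

If those typical values are anywhere near NBN Co's, a 20 km reach with a single 1:32 split only just fits, which is presumably why the splitter placement question matters.
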
In 2000, Adrian Blake in Cooma paid AGL/Great Southern Energy to lay extra conduit in the trenches being dug to reticulate natural gas in the town. The "Dot Boom" became the "Dot Bust" before the project could be financed, so we'll never know how that would've worked. Ten years later, Telstra officially acquired the conduit for next-to-nothing as the company assets were sold.

Cooma was nearly the last town in that AGL project, so no other towns were targeted. Even so, it's unlikely that AGL would've unilaterally laid empty conduit "on spec", i.e. without a contract.

Even Telstra, who intimately knew the costs and value of trenching, didn't mandate over the last 20 years that empty conduit be laid in every trench opened. The engineering staff knew that there was a need in the immediate future for a change from copper to fibre in the customer distribution network, just as they'd been making that transition since the first intercity optical fibre was laid circa 1986. Yet nobody thought to prepare. How could that be?

The lesson was that nobody, inside or outside AGL and Telstra, had ever thought of the trenching as an asset that could be leveraged... At $25-$45,000/km for trenching [or ~$1M/sq.km in urban settings], that's a remarkable commercial oversight.

So, will NBN Co be laying empty conduit(s) alongside its new fibre?

I think it's hubris to assume that "this network will never fail, be damaged, need expansion/upgrade or need to be changed".
It's an obvious and cheap thing to do, but experience shows that Utilities Engineers and their managers haven't thought this way before.

And why am I dubious of GPON?

Because nobody who does digital networks uses it; everyone uses Ethernet.
That's what all commodity devices use and what chip/hardware vendors invest in and develop for.
We now have 10/40/100Gbps over Fibre defined for Ethernet.

Plus Ethernet-over-fibre is available with WDM (Wavelength Division Multiplexing), allowing the capacity of existing fibres/networks to be upgraded in-place and on-demand. This maximises the utility of the cable-plant investment whilst matching investment in capacity to demand. Economically, this is close to a perfect situation.

In the mid-80s, Telecomms Engineers invented ATM (Asynchronous Transfer Mode) and Telcos and their vendors invested massively in building and deploying this technology. It was meant to be the future technology to underpin all converged digital services.
But like Fibre Channel, it's a technology that's obsolete and on the wane.

Backbone networks are universally being migrated away from ATM to IP and Ethernet. Whilst IP/Ethernet don't yet have all the QoS and manageability features large Telcos like, they are appearing.

How long before GPON is similarly out-moded and in dire need of upgrade? As soon as 10 years?

Will the NBN Co network allow that upgrade cheaply and easily, or in another 10 years will we have to spend another $20-$40B digging exactly the same trenches?

As said at the start, I don't know where to find out this technical and design information.
Any pointers welcome!