Chatting with Ben Segal - Podcast Episode 10 - Connecting Europe & CERN to the Internet

Jul 10, 2020

David: [00:00:00] I remember reading about IBM offering to pay for the "pipe" and the bandwidth was one and a quarter Megabits per second ..

Ben: [00:00:08] 1.5

David: [00:00:09] 1.5 Megabits per second. 200 Kilobytes a second...

Ben: [00:00:12] It's called a T1 line.

David: [00:00:14] Right.
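
A quick aside on those figures: a T1 line runs at a nominal 1.544 Mbit/s, which is roughly the "200 Kilobytes" per second mentioned above. A short sketch of the arithmetic in Python (the 1.544 figure is the standard T1 rate, not a number quoted in the conversation):

    # T1 nominal line rate: 24 x 64 kbit/s DS0 channels plus 8 kbit/s of framing = 1.544 Mbit/s
    T1_BITS_PER_SECOND = 1_544_000

    bytes_per_second = T1_BITS_PER_SECOND / 8
    print(f"{bytes_per_second / 1000:.0f} kilobytes per second")  # prints "193 kilobytes per second"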

Ben: [00:00:14] Yeah. And it was like gold and it must have cost .. I don't know - it'd be an interesting thing to research just what it cost. But I would think one or two million a year.

David: [00:00:25] Yeah. And this was a transatlantic line.

Ben: [00:00:28] Yes, it was going to .. again, I'm having to remember .. it might have been to MIT, I forget where.

David: [00:00:36] Probably somewhere in New England. Because that was the closest.

Ben: [00:00:39] Right. Right. Then, of course, things in the States happened very quickly, NSFnet and so on. But no, in those days to get connected to the Internet, you simply rang up Joyce Reynolds at USC: "Hello, Joyce. Can I have some IP addresses? I'd like class B". She gave us two class B networks just like that! Before that, because I was only allowed to work on the inside of CERN and I couldn't have external connectivity, I numbered the network myself. I used class A network 100, so we were on network 100 all over CERN. Another thing worth mentioning from that time: the Ethernet at CERN, because there was no IP, wasn't routed.
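
For readers unfamiliar with classful addressing, this is roughly what the sizes Ben mentions amount to. A small sketch using Python's standard ipaddress module; the class B prefix shown is purely illustrative, since the actual networks allocated to CERN aren't named in the conversation:

    import ipaddress

    # Before CIDR, address space was handed out in fixed-size classes.
    class_a = ipaddress.ip_network("100.0.0.0/8")    # class A "network 100", the internal numbering Ben describes
    class_b = ipaddress.ip_network("130.0.0.0/16")   # an illustrative class B, not an actual CERN allocation

    print(class_a.num_addresses)   # 16777216 addresses in a class A
    print(class_b.num_addresses)   # 65536 addresses in a class B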

David: [00:01:24] Oh my God..

Ben: [00:01:24] It was one big broadcast network...

David: [00:01:29] It was a cluster of collisions, right.

Ben: [00:01:31] Yeah. So any single node anywhere on the CERN Ethernet that decided to broadcast...

David: [00:01:39] Oh, my..

Ben: [00:01:39] So in fact, there was a person - a friend of mine actually, in the main network group - and part of his job was monitoring this thing to see broadcast storms happening, and zap the person responsible. And you could get broadcast storms from various malfunctions. So we didn't have DNS or anything like that. If you needed to find a machine, you know, we used something called ARP. Every system had a host table .. an IP host table.

David: [00:02:07] That's your DNS.
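
The host table Ben mentions was a flat file mapping names to addresses, in the same spirit as the /etc/hosts file still found on Unix systems today. A sketch of what entries looked like; the hostnames and addresses below are invented, using the internal "network 100" numbering described above:

    # /etc/hosts-style host table (names and addresses are placeholders)
    100.1.0.1     vaxa
    100.1.0.2     vaxb
    100.2.0.7     cluster-gw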

Ben: [00:02:10] But on the Ethernet level we used ARP. So .. we had certain applications which were using raw Ethernet. Raw. You can take a raw Ethernet packet, define your own protocol on top, and go with that. So it was a zoo really. It was hard to maintain, and of course it grew quite quickly. But before we had IP allowed, properly, there were no routers - no routers doing the main job of routing..
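
To make "raw Ethernet" concrete: below is a minimal sketch, in Python, of the kind of thing Ben describes, building an Ethernet frame by hand with a private EtherType and a home-grown payload. It assumes a Linux machine with AF_PACKET raw sockets, which is a modern convenience rather than what the CERN applications of the time used, and the MAC addresses, interface name and EtherType are placeholders:

    import socket

    # 0x88B5 is an EtherType reserved by the IEEE for local experimental use.
    ETHERTYPE_EXPERIMENTAL = 0x88B5
    ETH_P_ALL = 0x0003

    def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
        # Ethernet II frame: destination MAC + source MAC + EtherType + payload,
        # padded out to the 60-byte minimum (the FCS is added by the hardware).
        frame = dst_mac + src_mac + ETHERTYPE_EXPERIMENTAL.to_bytes(2, "big") + payload
        return frame.ljust(60, b"\x00")

    def send_raw(interface: str, frame: bytes) -> None:
        # Raw sockets need root privileges.
        with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL)) as s:
            s.bind((interface, 0))
            s.send(frame)

    if __name__ == "__main__":
        frame = build_frame(b"\xff" * 6,                  # broadcast destination
                            b"\x02\x00\x00\x00\x00\x01",  # locally administered source MAC
                            b"hello, custom protocol")
        send_raw("eth0", frame)                           # "eth0" is a placeholder interface name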

David: [00:02:39] How many machines are we talking? It's like ..

Ben: [00:02:44] Like a thousand. PCs and stuff..

David: [00:02:45] Oh, OK. Well, that's a fair amount of collisions.

Ben: [00:02:49] No, no, no. Now remember, there were switches..

David: [00:02:53] Oh, OK.

Ben: [00:02:55] Were there switches...?

David: [00:02:57] Switches are dumb in that they just forward everything they see.

Ben: [00:03:00] No, no, no, no. Wait a moment. No, there were switches. The collisions were on each segment. So a guy on one side of a switch doesn't see the collisions on the other. But it was a mess.

