Greek Research Team breaks the memory wall and creates the world's fastest RAM
32585 upvotes
768 comments
EdajimaHeihachi
3 days ago
ellines.com
Greek Research Team breaks the memory wall and creates the world's fastest RAM
rseasmith
1
3 days ago

Hello and welcome to /r/science!

You may see more removed comments in this thread than you are used to seeing elsewhere on reddit. On /r/science we have strict comment rules designed to keep the discussion on topic and about the posted study and related research. This means that comments that attempt to confirm/deny the research with personal anecdotes, jokes, memes, or other off-topic or low-effort comments are likely to be removed.

Because it can be frustrating to type out a comment only to have it removed or to come to a thread looking for discussion and see lots of removed comments, please take time to review our comment rules before posting.

If you're looking for a place to have a more relaxed discussion of science-related breakthroughs and news, check out our sister subreddit /r/EverythingScience.

joeflux
4417
3 days ago

the proposed RAM cell deploys a monolithically integrated

It's a single thing.

InP optical Flip-Flop

It can store a 0 or a 1 and can be changed with light.

and a Semiconductor optical amplifier-Mach–Zehnder Interferometer

It uses a laser beam. The laser beam is split into two. One beam goes through the memory cell. By combining the two beams again, you can see if the light has been affected (phase shifted) by going through the memory cell and thus whether it was storing 0 or 1.
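
A minimal numeric sketch of that read-out idea (toy values, not the actual device parameters from the paper): split the beam, let one arm pick up a phase shift depending on the stored bit, recombine, and the detector sees bright or dark.

```python
import math

def mzi_output_intensity(phase_shift_rad, input_intensity=1.0):
    """Intensity at the output of a two-arm interferometer.

    The input beam is split in half; one half picks up phase_shift_rad
    (e.g. from passing through the memory cell), then the halves are
    recombined. Constructive interference -> bright, destructive -> dark.
    """
    # Two equal-amplitude waves with relative phase phi recombine to
    # intensity I = I0 * cos^2(phi / 2).
    return input_intensity * math.cos(phase_shift_rad / 2) ** 2

# Toy read-out: assume a stored "0" adds no phase shift and a stored "1"
# adds a pi phase shift (illustrative values only).
for bit, phase in [("0", 0.0), ("1", math.pi)]:
    print(f"stored {bit}: detector intensity {mzi_output_intensity(phase):.2f}")
```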

TheOnlyBliebervik
714
3 days ago

Thanks dude

trui92
454
3 days ago

Downside is size. These devices are large relative to their electronic counterparts, probably around 100 to 1000 times larger for a single cell. The work is nice, but I don't immediately see them being implemented in commercial solutions.

deelowe
313
3 days ago

Silicon photonics are just getting started. Hopefully, feature sizes will shrink quickly as manufacturing is sorted.

dininx
104
3 days ago

Think this is actually indium phosphide, also just getting started

deelowe
9
3 days ago

Thanks for the correction. Does it also use photolithography?

ukezi
3
3 days ago

Yes.

trui92
71
3 days ago

I know, but there are inherent limits in terms of the bend radius of the Si waveguides and some other stuff. For the bend radius: if the turn is too sharp, too much light will leak away. Hence you will not find filters smaller than 10 um x 10 um, which would be the smallest ring filters. If you look at cascaded MZIs, you approach 100 um x 100 um. The amount of functionality you would be able to squeeze into an electronics chip of that size is huge in comparison.

Either technology has its purpose, with Si photonics currently used for tele- and datacom. But I remain skeptical about optically addressed memory. However, I could be wrong. There is probably a niche application, but I don't think it is any type of RAM comparable to what's currently used in consumer computers. So it is not fair to compare it as such without giving context: yes, it might be faster, but it won't replace current consumer-PC RAM due to its size and cost limitations.

rants_silently
79
3 days ago

Reddit constantly blows my mind with how many smart people there are. The fact that there are multiple people on this post alone that know how this tech works and can have articulate conversation about it is crazy.

Dazzrr
15
3 days ago

You and me both

msew
11
3 days ago

Not me. I come here to try and find daily proof that I should not send the message to have this solar system expunged.

Humanity gets a pass today for this.

DrunkenCodeMonkey
46
3 days ago

There's a difference between the future of a concept like "mathematical engine" and physical concepts like storing memory using interferometry on silicon.

Photonics in general might find ways around the physical limits of silicon, but silicon photonics has quite clear limitations, which the poster above you was addressing, that will not be solved.

Using computers as the comparison: vacuum tubes were not miniaturized. They can't be. Rather, other physical properties were used to create transistors. Here we are looking at the use of specific physical properties. We can see the limit of their miniaturization potential.

Iamonlyhereforthis
3
3 days ago

Isn't it fair to say that we may see this in our computers soon? We're talking about major speed gains, which could mean we need fewer transistors, since one of these could do as many calculations as many of the current ones.

Plusran
161
3 days ago

That's how it goes. Applications where speed matters and size doesn't will benefit. Remember early computers were well over 1000 times larger than they are now.

Steve_the_Stevedore
66
3 days ago

That's how it goes. Applications where speed matters and size doesn't will benefit. Remember early computers were well over 1000 times larger than they are now.

I would put it a different way: the same computing power was probably way more than 1000 times bigger, but today's supercomputers are way bigger than back then. The space we dedicate to computers today is huge. We have warehouse complexes filled with them. So clearly we are okay with reserving space for our computing power as long as it pays off. So this technology doesn't even need to get to the point where it fits in a desktop case or even a server rack. If it's fast enough we will make room for it.

ThrowMeAway11117
25
3 days ago

It does however need to be at least 100x faster if it's 100x larger, or lots cheaper to adjust for speed:size ratio. Otherwise it's still not a good investment for large warehouse sized processing banks.

derpetyherpderp
26
3 days ago

Not if the speed is a selling point in itself. There could be applications that are infeasible with today's memory speed which are now made possible.

Hypothesis_Null
21
3 days ago

Speed itself will be hampered by too big of a size difference.

Our computers run at 4GHz today. That's 4 clock cycles every nanosecond. Light can only travel about 1 foot in a nanosecond. So every 3 inches away from the processor your RAM is, and every 3 inches of path length within it to do a memory read, means an extra clock cycle of delay for each operation.

It's not the largest of considerations, but it's not necessarily a trivial one either.
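
Rough arithmetic behind that comment (free-space speed of light; signals in copper or waveguides are slower, so the real budget is tighter):

```python
# At 4 GHz, how much path length costs one clock cycle of delay?
c_inches_per_ns = 11.8        # light covers ~11.8 inches per nanosecond
clock_ghz = 4.0
cycle_ns = 1 / clock_ghz      # 0.25 ns per cycle

inches_per_cycle = c_inches_per_ns * cycle_ns
print(f"~{inches_per_cycle:.1f} inches of path per clock cycle")
# -> ~3.0 inches, so every extra 3 inches of path adds a cycle of latency
```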

Cinderheart
5
3 days ago

I hate that the speed of light is so slow.

Gibborim
2
3 days ago

It is even slower than the poster is claiming! That is the speed of light in free space, but all this computing is being done in materials that slow the propagation of electric/magnetic fields considerably!

meneldal2
3
3 days ago

It is already a problem for the high-speed connections between the CPU and external devices, which led to requirements on PCIe trace lengths to avoid the clocks being out of sync.

Thanks-For_The-Gold
16
3 days ago

I'm sure the algo traders would sacrifice as much space as necessary for even 5% faster processing, as would some national defence applications.

Comic_Book_Cowboy
3
3 days ago

This was my immediate thought on what kind of application this would be worthwhile for. They spend crazy amounts of money on their server locations just to gain a nanosecond or two on their trade times

lothpendragon
2
3 days ago

Try 0.05% 😖

TheNamelessKing
2
3 days ago

Scientific applications like the Square Kilometre Array would benefit enormously from super fast storage.

Aggro4Dayz
5
3 days ago

It absolutely doesn't need to scale speed linearly with size to be useful. It probably won't replace memory in server farms yet, but there are applications where it would be a good boost. Remote lambdas, for example, where you don't need a ton of memory but you do need it to be fast.

martixy
4
3 days ago

Also absurd, pointless computations of mathematical constants to insane precision.

garimus
2
3 days ago

That was a very awesome read. Thanks for the link!

parkerSquare
1
3 days ago

What about power consumption? That’s usually a limiting factor too. I haven’t read the article yet (because the site times out for me).

trui92
3
3 days ago

In principle, electronics win on a single-device basis. They do not consume power in either the ON or OFF state; they only consume power when changing from one to the other. Given the application is data transfer, it will switch a lot to transfer all those 001010111001...

But my gut feeling says that a single laser will consume more than a single standard RAM cell. However, you can split the light into multiple parts, and thus drive many optical cells with one laser. Possibly a hundred optical cells with one laser. I still think electronics would win with 100 electronic memory cells versus one laser.

But is it worthwhile to compare? I said in another comment that the application would be different between these technologies. Trying to replicate PC-level RAM in terms of size and number of cells will definitely consume more power. You are talking millions of transistors... typical lasers have a lasing threshold current of 5-50 mA. Imagine a million lasers, and your wattage becomes very high!!
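
Back-of-the-envelope version of that last point (the threshold current and drive voltage here are ballpark guesses, not measured figures for this device):

```python
# Power just to hold a million lasers at threshold, ignoring switching energy.
threshold_current_a = 0.005      # 5 mA per laser, low end of the quoted range
drive_voltage_v = 1.5            # rough diode-laser forward voltage (assumed)
num_lasers = 1_000_000

total_watts = threshold_current_a * drive_voltage_v * num_lasers
print(f"~{total_watts / 1000:.1f} kW for a million lasers at threshold")
# -> ~7.5 kW before any useful switching happens
```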

ghostofexatorp
3
3 days ago

We need an idiot redux for every post like this.

Good job

Nonstopbaseball826
1
3 days ago

Wish this was at the top, thanks!!

banammockHana
1
3 days ago

Ooh Ooh now do Quantum Mechanics! XD

avenlanzer
1
3 days ago

You da MVP

daniel_ricciardo
1
3 days ago

What?

clboisvert14
1
3 days ago

One of the best /r/eli5

newgrasser
1
3 days ago

Thanks for breaking that down.

I have a dumb question, why split the laser beam at all? Why not just pass it through the memory cell (unsplit) and see if anything comes out, thereby determining if 0 or 1?

joeflux
2
3 days ago

Passing the light beam through the memory cell with a “1“ in it makes the light travel just a tiny tiny bit slower. The effect is small, rather than being some obvious thing that you could detect with your eye.

By combining the two laser beams, you get a big obvious pattern that you can see with your own eyes, showing whether the light in one path has been slightly delayed or not.

Google image search for interference pattern (sorry on phone now)

Old_Grau
1
3 days ago

Someone give this man a PR job nao!

habaneraSAUCE
1
3 days ago

The latter two are Layman-appreciated, but I fear the man/woman who couldn't figure out what "monolithically integrated" meant :P.

Gracias!

w1n5t0nM1k3y
3434
3 days ago

It says the write speed is 10Gb/s, but that seems quite low. DDR4 can achieve over 17 GB/s. Maybe they mean 10 GT/s, which on a 64-bit bus would give a transfer speed of 80 GB/s.

Gorny1
3073
3 days ago

I think the headline is misleading. The article states that they built optical RAM (o-RAM), which is different from normal electronic RAM. Our PCs usually have electronic RAM. They've built the fastest optical RAM yet and are looking forward to better versions that can outperform electronic ones.

At least that's what I understood by reading the article (English is not my native language).

morewubwub
1793
3 days ago

The write speed is about 1/4 of DDR4 or DDR3. The real benefit is the power consumption. In CPUs this would allow a smaller fabrication size due to the wild decrease in heat produced. I'm not sure what the RAMifications are for memory.

https://en.wikipedia.org/wiki/Nanophotonic_resonator#Examples/applications

"The use of optical signals versus electrical signals is a 300% decrease in power consumption"

The wikipedia source actually describes it as 1/300th the power consumption

Loafly
935
3 days ago

What does 300% decrease mean? Does it produce power?

tvks
686
3 days ago

So instead of electricity to flip bits, light pulses are used. The energy to generate the light is 33% of the energy used to flip a bit electronically.

Loafly
671
3 days ago

I'd argue that a 66% decrease would make more sense.

Thank you for explaining.

crusherpoi
307
3 days ago

Correct me if I'm wrong, but I think you can say either:
-66% (implying subtraction)
or
300% decrease (implying division)

(Edit: I was wrong, here's why:

a) 200% of the original value is a×2
200% of 20 = 40. Whenever you see "of", think multiplication.
b) +200% is a×3
20 + 200% (200% of twenty, of course) = 60

So a three-times power consumption increase is a×3, which can be written as 300% of current consumption.

A three-times power consumption decrease is a/3, which is 33% of current consumption.

So we can say either

minus 66% power (subtraction)

or

33% of the original power (division).

By saying "300% decrease" we are wrongly trying to state the same thing as saying "33% of the original". It's a very informal statement that shouldn't be used.)

XLNBot
252
3 days ago

Nah, 100% decrease means that it reached 0. So there's no point in removing more than 100%. While adding more than 100% makes sense in many scenarios

crusherpoi
58
3 days ago

There, I figured it out, sorry for the confusion.

a) 200% of the original value is a×2
200% of 20 = 40. Whenever you see "of", think multiplication.
b) +200% is a×3
20 + 200% (200% of twenty, of course) = 60

So a three-times power consumption increase is a×3, which can be written as 300% of current consumption.

A three-times power consumption decrease is a/3, which is 33% of current consumption.

So we can say either

minus 66% power (subtraction)

or

33% of the original power (division).

By saying "300% decrease" we are wrongly trying to state the same thing as saying "33% of the original". It's a very informal statement that shouldn't be used.

SirCB85
42
3 days ago

One could conclude that it is a very convoluted way of justifying the use of big numbers that look good on a headline.

hampsted
17
3 days ago

To make this more clear for everyone, let’s just use a formula:

(|new - old| / old) * 100 = percent change

If new > old, it’s an increase. If new < old, it’s a decrease.
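
Plugging the thread's numbers into that formula (using the 1/4-power reading of the article; the exact factor is disputed above, so treat these as illustrative):

```python
def percent_change(old, new):
    """(|new - old| / old) * 100, labelled as an increase or a decrease."""
    pct = abs(new - old) / old * 100
    direction = "increase" if new > old else "decrease"
    return f"{pct:.0f}% {direction}"

print(percent_change(old=4, new=1))   # electrical -> optical: "75% decrease"
print(percent_change(old=1, new=4))   # optical -> electrical: "300% increase"
```

Same pair of numbers both ways: a "300% increase" in one direction is only a "75% decrease" in the other, and a "300% decrease" is never a valid description.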

gamma286
14
3 days ago

You got it!

deftspyder
1
3 days ago

I know which one we're using for the marketing campaign!

Painfreeday
1
3 days ago

This guy plays path of exile

WaiYanMyintMo
1
3 days ago

Ur right

EmuRommel
58
3 days ago

I might be wrong but that doesn't feel intuitive at all. What would a 50% decrease mean then? Going from 100 to 50 or from 100 to 200?

deadtorrent
32
3 days ago

I might be wrong but that doesn’t feel intuitive at all. What would a 50% decrease mean then? Going from 100 to 50 or from 100 to 200?

Yes, you decreased the number by 50% of its starting value.

e: I haven't had coffee. 100 to 200 would be a 100% INCREASE, but 200 to 100 is a 50% decrease.

EmuRommel
45
3 days ago

/r/InclusiveOr

but if that's so then saying 300% decrease makes no sense whatsoever.

LMeire
1
3 days ago

It's actually very intuitive if you look at it right. Just remember that % isn't just the name of an operation, it literally means "per 100". So 50% = 50 of every 100.

LXNDSHARK
9
3 days ago

You are wrong unfortunately. Percent change is always addition/subtraction.

EngineeringNeverEnds
1
3 days ago

How it should be used and how it is used are different things though, so he's not wrong in that it's almost certainly how it was used in this context.

LXNDSHARK
3
3 days ago

Used completely incorrectly is still incorrect. He might be right about what they meant to write, but it literally means something other than what they wrote.

hopbel
1
3 days ago

Nope. "Increased by 200%" is addition but "200% original capacity" would mean double (multiplication).

But yeah, using percentages above 100% gets confusing because there's multiple ways to use it. I wish people just used simple multipliers

LXNDSHARK
1
3 days ago

Nope. Percent and percent change are different things.

Yodiddlyyo
1
3 days ago

Uh, no. If something's price increases 100%, we understand that it doubles. Sure, you can say that it's being added, but it's being multiplied. That's what fractions are. If something is 25% less, you know it's ×0.75. Or you can take 0.25 of it and subtract that amount, but you're still multiplying/dividing. Percent is never just adding/subtracting.

teslafolife
2
3 days ago

What!?!! Then what on earth is a 100% decrease. No change?

Raytional
2
3 days ago

It would be complete loss. 100% decrease is all of it. So 100% decrease of 20 is a decrease of 20, bringing it to 0.

SandyDelights
2
3 days ago

This is correct, but the latter is not really a usage that I would consider common, and would avoid it.

“Decreased by” would imply subtraction, while a flat “x% decrease” would mean division.

Lilscribby
1
3 days ago

Big numbers good

mkusanagi
1
3 days ago

Count on a journalist to use the more confusing measurement because it has a bigger number.

irlingStarcher
1
3 days ago

Good on you to realize the mistake and explain it. Thanks!

googlemehard
1
3 days ago

Or just say three times less power consumption

crusherpoi
2
3 days ago

That's the informal way, which technically isn't correct but gets the message across. It's not three times less, it's a third of the original.

xZwei
32
3 days ago

They meant: It takes less electricity to produce “light pulses” than to flip a single bit.

You are correct that both require electricity still.

About 1/3 the amount of electricity used to flip a bit, if I understand correctly.

norieeega
12
3 days ago
jimx117
1
3 days ago

It's true quomm

Fuanshin
8
3 days ago

That's like, um, 66% decrease?

zefy_zef
2
3 days ago

So to use the same amount of power could be double the write speed?

frugalerthingsinlife
1
3 days ago

Massless particles - light - are a lot easier to push around than electrons which have very little mass, but some mass. There is a bit of loss in the lines, but I think most of that loss would be in the transistors. (I took a course in transistor design in undergrad, but I still don't really understand how the damn things work.)

But basically, they've replaced the 2 transistors of a NAND gate with this:

...a monolithically integrated InP optical Flip-Flop and a Semiconductor optical amplifier-Mach–Zehnder Interferometer, On/Off switch configured to operate as a strongly saturated differentially-biased access gate.

Simple, right?

flumphit
34
3 days ago

I read it as +300% power efficiency (uses 1/4 the power), but it’s not something a numbers-savvy person would say, so it’s ambiguous.

acog
29
3 days ago

I'd argue it's not merely ambiguous, it's an error.

If you had a 100% power consumption decrease, that means something now consumes no power. You can't decrease more than that unless it became a generator.

jarfil
3
3 days ago

Power efficiency = work performed / power consumption.

Using 4x less power (1/4 of the power) to perform the same work is a 300% efficiency increase.

_____no____
5
3 days ago

is a 300% efficiency increase.

Yes, but it's NOT a "300% decrease in power consumption", which is what the article says.

nuclearusa16120
7
3 days ago

Internet Journalist =/= Electrical Engineer. Not saying it's an excuse, but it seems to be the likeliest reason.

Engineer says: "We've seen as much as a 300% increase in power efficiency over traditional RAM."

Reporter: "So what does that mean for consumers?"

Engineer: "It uses way less power."

What gets reported: "NEW RAM USES 300% LESS POWER."

MattieShoes
2
3 days ago

4x less power

There's the problem -- 4x less power is not the same as 1/4 of the power. You can only use 1x less power without becoming a generator.

jarfil
1
3 days ago

In popular parlance, "x times less" == "1/x times".

flumphit
2
3 days ago

Oh, it’s wrong, no question. The only open question is what the author intended (and completely failed) to communicate. And that’s less than clear.

phillosopherp
1
3 days ago

I don't know if I would say that it's an error, I think it's an attempt to use numbers to generate headlines, as well as possible investment.

padeca07
14
3 days ago

I think it's just an improper use of percentages in trying to communicate the math here.

Exodus111
6
3 days ago

Dumb way to use a number.

It should be 1/3 of the power consumption.

YoSupMan
2
3 days ago

"Optical vs. electrical is a 300% decrease in power consumption" doesn't make sense. Let Po be power consumption of optical and Pe be power consumption of electrical. Then
Po = Pe - 300%*Pe = -200% * Pe = -2 * Pe.

So, if Pe = 1 (whatever units or scaling you want), then this is saying that Po = -2 (same units and scaling).

Since Po and Pe are both > 0, Po can never be more than 100% less than Pe. Pe may be a 300% increase from Po (i.e., Pe = Po + 300%*Po = 400%*Po = 4*Po), but that does not mean that Po is a 300% decrease from Pe. Rather, Po is 1/4 Pe, or 25% of Pe.

If X is N times the size of Y (X = N*Y) , one cannot say that Y is N times smaller than X. Instead, one would say that Y is 1/Nth of X [Y = (1/N)*X = X/N].

Demojen
1
3 days ago

Higher speeds, longer lifespan, lower temperatures, lower power cost

zephyrprime
1
3 days ago

Means 75% decrease in power.

hopbel
1
3 days ago

My guess is they mean "3 times less power" which translates to 1/3

ashrasmun
1
3 days ago

300% decrease is a quite moronic way to put it. I bet they meant it's 1/3 of the current consumption.

g4r37h
1
3 days ago

Using positive scales to denote decreases is a poor choice from the author. Saying something is 300% automatically means “3x as much” so to then have to transform that into “1/3 as much” is incredibly clumsy.

PartyboobBoobytrap
23
3 days ago

Yeah, RAM is constantly paging and it's a huge power cost.

beerdude26
11
3 days ago

Server DDR2 was like 10 watts per stick

Obilis
1
3 days ago

I'd like to one day have a laptop that doesn't give me second-degree burns when I run games and put it on top of my lap.

The power consumption benefits are of secondary importance to me.

whatsnottakentoo
5
3 days ago

The less electricity used the less heat generated.

Obilis
4
3 days ago

Yes, that's what I'm saying: Everyone is talking about the power consumption, but I'm interested in what this implies about less heat generation.

Sniixed
3
3 days ago

power consumption is directly producing heat in a pc/laptop..

Aside from Display emitting light/ RBG / Fans

MattieShoes
1
3 days ago

Aside from Display emitting light/ RBG / Fans

Those produce waste heat too... Especially RBG

🔥

NutDestroyer
5
3 days ago

Does the RAM's power consumption/heat produced actually make a practical difference when it comes to CPU fabrication size? My impression is that CPUs today are largely limited by the amount of heat they are producing on their own, and that with RAM being located away from the CPU, the two parts aren't really interacting with each other.

Are you saying that we could use this oRAM to replace registers and the cache in the CPU? Maybe I've misunderstood what you're getting at.

morewubwub
3
3 days ago

No, I'm saying that CPU fabrication size is limited by heat dissipation ability. Given that CPUs have hit a floor on how small they can get due to thermal limitations, perhaps RAM has its own thermal floor affecting how fast/small it can be fabricated.

NutDestroyer
1
3 days ago

That makes sense. Thanks for the clarification.

OnyX824
2
3 days ago

Right. For remembering the advantages of photonics, the acronym SWaP is used: size, weight, and power.

msrichson
1
3 days ago

What is the application for this? If you are trying to create a supercomputer, you can manage the heat of the more powerful electronic RAM. Or is it that smaller handhelds could gain longer battery life?

MaskMan191
1
3 days ago

decrease in heat produced

I didn't really understand much of anything else you said, but I understood that! Bring on the O-RAM, my PC heats my room to 12 degrees Fahrenheit above ambient!

aperture_synce
1
3 days ago

Huh, that could be super useful for smartwatches and other small electronics, in its current form.

deponent
1
3 days ago

So amazon, Google, Facebook, Microsoft and Apple are going to save a ton of money soon.

harugane
1
3 days ago

Came here for the puns.

accountno543210
1
3 days ago

So, what is the real monthly savings on power for the average consumer? Why should we care?

morewubwub
1
3 days ago

More power == more heat generated (energy lost in transmission). Getting comparable performance with dramatically less heat means we can eventually (with more research) get higher performance out of O-RAM than what we currently get out of electrical RAM. My comment has little to do with electricity costs and more to do with the theoretical physics limitations we've run into in CPUs and how those limits probably exist for RAM as well.

devildocjames
1
3 days ago

(☞゚ヮ゚)☞

etahea
1
3 days ago

Wikipedia was updated to list a 66.7% decrease.

That's no better. Here's what their reference says:

By taking advantage of the strong confinement of photons and carriers and allowing heat to escape efficiently, we have realized all-optical RAMs with a power consumption of only 30 nW, which is more than 300 times lower than the previous record, and have achieved continuous operation.

Spot the mistake? :-)

Steve_Esh2020
2
3 days ago

What is your native language?

hshaw737
1
3 days ago

A misleading headline on /r/science? I am shocked!

FineAndFit
1
3 days ago

Optical PCs are maybe 50 years away. We barely have functioning optical transistors in the works.

I'm okay with conventional electronic computers for my lifetime.

willis936
121
3 days ago

My interpretation of the article is that they're running at 10 GHz (i.e. 10 G cycles/s). So multiply that by your bus width to get raw throughput. If it's 64 bits wide across two channels it would be 160 GB/s.

Also, GT/s and Gb/s are interchangeable here. The capital B is important. Nomenclature gets more complicated when you move to non-binary baseband modulations.

xcver2
71
3 days ago

I do not know what there is to interpret here. The article clearly states that it is an all-optical RAM that reached 10Gb/s, which is a 100% increase over previous versions.

Genetic_outlier
60
3 days ago

I think the confusion is just that the article's title is completely incorrect. This is not the fastest RAM in the world, and people are trying to figure out how it could be. It's the fastest optical RAM in the world, which is 4 times slower than current consumer RAM. Unless this has much more potential than current RAM technologies, this just is not news.

GooeyPod
39
3 days ago

The headline highlighted the wrong part. It's definitely news, because it's the fastest optical RAM and paves the way for getting the much less power-hungry optical RAM up to speeds comparable to electronic RAM.

Multi_Grain_Cheerios
13
3 days ago

It's definitely news. People just don't bother to read past the misleading headline.

makemejelly49
1
3 days ago

The only way to make that kind of RAM faster would be if the case the RAM was installed into could contain a vacuum. oRAM uses light pulses to flip bits, and light moves at the speed of light when it's in a vacuum. So you'd need to create a computer that's basically like outer space.

antiduh
23
3 days ago

I really don't like treating bits/sec and T/sec as interchangeable, because it confuses the process. How much total throughput you get depends on:

  • how many bits you do per transaction.
  • how many transactions you can do per clock.
  • how many clocks you can do per second.
  • how many channels you have.

DDR memory is always 2 T per clock. So DDR4-4000 memory runs at 2000 MHz, 2 transactions per clock, my bus is 64 bit so 64 bits per transaction, and I have two channels, so I can do 2 transactions simultaneously.

Thus 2000 MHz * 2T/Hz * 64 bits/T * 2 == 512 Gbits/sec, or 64 GBytes/sec.

Or 32 GBytes/sec if you want to consider single channel performance. Wikipedia lays it straight out:

https://en.wikipedia.org/wiki/Memory_bandwidth

The only confusing part is understanding what the X means in "DDRN-X".

X = clocks/sec * transactions /clock = 2000 * 2 in my case.
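
The same arithmetic spelled out, using the commenter's example DDR4-4000 dual-channel setup (not the optical RAM from the article):

```python
clock_hz = 2000e6        # 2000 MHz base clock
transfers_per_clock = 2  # "double data rate"
bus_width_bits = 64      # per channel
channels = 2

bits_per_sec = clock_hz * transfers_per_clock * bus_width_bits * channels
print(f"{bits_per_sec / 1e9:.0f} Gbit/s = {bits_per_sec / 8 / 1e9:.0f} GB/s")
# -> 512 Gbit/s = 64 GB/s
```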

o11c
1
3 days ago

Generally, you also have to worry about 8b/10b (or whatever) encoding.

For a single lane, 10 GT/s = 8 Gb/s. And for networking, the carrier frequency, measured in GHz, is yet another different number (much higher).

antiduh
3
3 days ago

Generally, you also have to worry about 8b/10b (or whatever) encoding.

8b/10b encoding ("bit expansion coding") is only used in systems that perform clock recovery; it's used to ensure that there are at least some number of transitions in the signal every so many bits. In 8b/10b encoding, you can guarantee at least one transition every 10 observed bits (conveying only 8 bits worth of information).

Systems that perform clock recovery do so using phase-locked loops (PLLs). A PLL is used to adjust the phase of a sampling clock to the phase of the incoming signal. If the incoming signal has no transitions in it, though, then the PLL has nothing to work off of. Hence, the need for bit expansion encoding in clock-recovered signals.

DDR does not use 8b/10b encoding, so there's no reason to mention it in the context of calculating the memory bandwidth of DDR memory.

DDR is a parallel bus architecture, not a serial architecture. DDR does not have "lanes". DDR does have a bus width, sometimes called a "line". An x86 motherboard usually has a DDR bus width of 64 bits per channel. Modern video cards have a DDR bus width of 384 bits, sometimes 512 bits.

What you're thinking of is PCI Express. PCI Express uses bit expansion coding, uses clock recovery, and names its bus width in terms of lanes. PCI Express 1.0 and 2.0 use 8b/10b coding, and PCI Express 3.0 uses 128b/130b encoding.

For a single lane, 10 GT/s = 8 Gb/s. And for networking, the carrier frequency, measured in GHz, is yet another different number (much higher).

There are some contexts in which these statements are somewhat true, but this doesn't apply to memory bandwidth calculation for DDR memory. In the context of PCI Express, which is what I think you're thinking of, this still isn't right, because PCI Express 3.0 doesn't use 8b/10b encoding anymore, so your 10 T = 8 b math no longer works.

If you want to calculate the effective throughput of any networking standard, you have to consider much more than what you've stated.

  • How many channel states/RF states/RF symbols are used by the signalling system? EG, QAM-256 has 256 possible RF symbols (phase and amplitude combinations for QAM), and thus can transmit 8 bits per symbol.
  • How many states/symbols are transmitted per second?
  • When making a transmission, how many symbols are wasted for receiver tuning time (agc, channel estimation, receiver power-up time)?
  • How are bits converted to symbols? Is a bit expansion code being used? Is some forward error correction like Reed Solomon being used?
  • How are bits used to transmit user data? Are there headers? Trailers? Packing bits?
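
Toy version of that checklist (every number below is invented for illustration, not taken from any particular standard):

```python
import math

symbols_per_sec = 1e9                # raw symbol rate
bits_per_symbol = math.log2(256)     # QAM-256 -> 8 bits per symbol
fec_rate = 0.8                       # fraction of bits left after FEC overhead
framing_efficiency = 0.95            # fraction left after headers/trailers

user_bits_per_sec = symbols_per_sec * bits_per_symbol * fec_rate * framing_efficiency
print(f"{user_bits_per_sec / 1e9:.2f} Gb/s of user data")   # -> 6.08 Gb/s
```
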
unripenedfruit
9
3 days ago

Actually, I think here GHz and Gb/s may also be interchangeable, and they use both in the article.

If 1 cycle carries 1 bit of information, then the frequency (Hz) is the same as the bitrate (bit/s)

The article clearly mentions that it is optical ram, running at read and write speeds of 10Gb/s, which is a 100% increase over previous demonstrations, and later states that optical RAMs have previously only been shown to operate up to 5GHz.

However, experimentally demonstrated optical RAMs have been limited to up to 5 GHz only, failing to validate the speed advantages over electronics.

dieortin
5
3 days ago

If 1 cycle carries 1 bit of information

Why would a cycle only carry 1 bit?

unripenedfruit
10
3 days ago

The bitrate and frequency are only equal if 1 cycle carries 1 bit.

It doesn't have to and it's not always the case, but in the article they seem to use bitrate and frequency interchangeably.

ashchild_
1
3 days ago

RAM that transfers 1 bit a clock cycle is worthless for anything practical. The moment you spilled out of your cache, performance would start to look like you dug up an 8086

unripenedfruit
4
3 days ago

That doesn't change the fact that the article uses the two interchangeably.

For them to be the same, it needs to be 1 bit per cycle. That's all I'm pointing out.

kinshadow
1
3 days ago

The T stands for Transactions. It is the same as bus frequency for SDRAM and 2x frequency for DDR.

ukezi
47
3 days ago

That 10Gb/s is for a single RAM cell, which is an enormous amount. The 25.6 GB/s = 204 Gb/s of DDR4 is achieved by using 64 rows of cells in parallel. That would get you 640 Gb/s = 80 GB/s with the same memory configuration. DDR5 is specified up to 64 GB/s at the moment.
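
Same numbers in one line of arithmetic (this assumes the 10 Gb/s figure really is per cell and that cells can simply be read in parallel, which the article doesn't spell out):

```python
per_cell_gbps = 10        # per optical cell, per the article
rows_in_parallel = 64     # same width as a DDR channel
total_gbps = per_cell_gbps * rows_in_parallel
print(f"{total_gbps} Gb/s = {total_gbps / 8:.0f} GB/s")   # 640 Gb/s = 80 GB/s
```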

olderaccount
31
3 days ago

World's fastest optical RAM. The title of both this post and the article are missing the word optical. This is not the fastest RAM. But it is twice as fast as the previous fastest optical RAM.

EmilyU1F984
14
3 days ago

If this is just a single cell, then it would be faster than a single cell of DDR4. If it's a whole stick then it would be slower.

imadnsn
2
3 days ago

Is it a bank or a DIMM?

III-V
2
3 days ago

Neither. I wasn't able to find specific info to confirm this, but it's just a cell or several cells as a proof of concept. You would need thousands to make one IC, of which you'd need several to make a DIMM. And we'd be talking 1970s-era capacity, best case.

There's research (this work), and then there's development (scaling it up to make it manufacturable).

Clitoris_Thief
2
3 days ago

Wait I thought we were talking about dance dance revolution

Accujack
1
3 days ago

Only for line dances.

fnordstar
19
3 days ago

This is talking about a single memory cell isn't it? Actual ram chips access many of these in parallel. So, not comparable.

zephyrprime
11
3 days ago

They say for "a cell" so I'm sure they mean 10gb/s on a 1 bit bus.

Also, DDR4 has a cell speed on it is only 1/8th it's tick speed. DDR4-3200 only has a cell speed of 400mhz. The whole reason "double date rate" tech was needed was because the cell speed of ram is so low. This optical ram apparently has an actual speed of 10 ghz.

xaminmo
2
3 days ago

A DDR4 DIMM accesses multiple cells in parallel. A single cell is not 17GB/sec.

SaftigMo
1
3 days ago

It's likely for in-memory storage, which is actually solid state but treated as RAM. There's currently a revolution in internal cloud storage going on, because regular storage is not fast enough. Think hospitals and billion-dollar company intranet storage.

Ruben_NL
1
3 days ago

GT? GigaTerra?

jarfil
2
3 days ago

Not sure if you're trying to mock Tera or Terra, but in this case it's GigaTransactions or GigaTransfers.

mn_aspie
1
3 days ago

Members of the Wireless and Photonic Systems and Networks (WinPhoS) research group of the Aristotle University of Thessaloniki, created the fastest all-optical RAM cell in the world. (emphasis mine)

It's literally the first sentence in the article and everyone is arguing over their interpretation of what it means. Yes the title is misleading (clickbait?) but the claim is clearly about optical RAM cells.

MistaSmiles
1
3 days ago

You can transmit faster by working in parallel; it's trivial to increase your GB/s this way. There is probably some nuanced metric they beat.

Sayfog
218
3 days ago

Here's the associated paper if anyone has access: https://www.osapublishing.org/abstract.cfm?uri=CLEO_SI-2019-STh4N.5

Unfortunately my uni's OSA access doesn't have it, so if anyone can check and reveal details in an "Explain like I'm an EE (almost) grad" that would be appreciated.

The article in OP mentions a Mach-Zehnder interferometer - are they setting that up so the two paths phase-cancel for a 0 and add for a 1? Or is it something more complex? Also, is it 100% optical with regards to addressing and control, or are those electrical with just the data being optical?

Alaeuwu
40
3 days ago

Hi, I'm an undergraduate student and I'm just getting into the world of academia. I was wondering, what's the difference between these osapublishing papers and the ones available for free at https://arxiv.org/? Thank you, and sorry this is off-subject.

beeeel
33
3 days ago

Some journals are open-access: the papers published there, and often the data for those papers, is available for free. Arxiv is an open access pre-print repository - people put their papers there for critique before they're published.

Some journals work on a subscription model: they might pay authors for publishing, but they always charge for access. More journals are moving towards the open access model in recent years.

BigGayMusic
8
3 days ago

Journals never (insofar as I've seen, experienced, and heard after a decade in higher learning--dealing with this particular nightmare is a bit of a pet project) pay the academic who wrote the paper, nor do they pay the academics who peer reviewed it. If you think this sounds insane, considering academic publishing is a multi-billion dollar industry, you should take up the cause of open access academic publishing. It aims to keep taxpayer funded research in the hands of the people that pay for it, rather than behind thousand dollar subscriptions no individual could ever hope to afford. Some large private and public institutional libraries pay upwards of 60% of their entire yearly purchasing budget on journal subscriptions alone.

But who assures quality in open access journals, you ask? Well, we couldn't do any worse than the current system which is riddled with pay-for-publication schemes and publishing "headline grabbing" articles with major experimental flaws. Basically I think we should just remove the middle man and keep everything else: no one is getting paid for peer review now, how hard will it be to get people to keep doing review for free? Especially if they know the result won't be hidden in some extraordinarily expensive walled garden.

Edit:

I should also add that open access does not mean totally free in this context. Web hosting is expensive and hosting 1,000,000's of searchable documents requires serious hardware. However, rather than $5,000 a month for access to a single journal for an institution, it would be more like $1.00 per student per month for access to all the journals. This is a couple orders of magnitude of difference in pricing. Individuals could pay the same price for the same access. Thus, open access is not free, but is within reach of 99.9% of people; a pretty major bump on the current state of affairs.

Tsimshia
10
3 days ago

Arxiv and Biorxiv are where people put their papers either if they don’t want to submit them to a journal, or before they submit them to a journal.

It lets people be much more flexible with formatting, but is not peer reviewed. That means you have to be reading something by a reputable author rather than assuming the peer reviewers for a journal did a good job.

Many journals charge extra from the researcher to have their paper available for free (“open access”) but do not do anything if it was already available for free before they agreed to publish it.

Most journals require a subscription to access the full papers they publish.

Natanael_L
3
3 days ago

The free ones (open access) have a different funding model.

Jumbledcode
1
3 days ago

ArXiv stores pre-print articles. In general they are put up there by the researchers who produced them as a preliminary copy before the final peer-review and publication process.

quicklikeme
3
3 days ago

This is a link to the CLEO conference paper, which upon further reading gives the same results as the previously published optics letters paper: https://doi.org/10.1364/OL.44.001821.

AcidTWister
90
3 days ago

the proposed RAM cell deploys a monolithically integrated InP optical Flip-Flop and a Semiconductor optical amplifier-Mach–Zehnder Interferometer

This is just way over my head. Can I get an ELI5? Half of this sounds made up.

antiduh
74
3 days ago

InP refers to a material made out of indium phosphide. They made an optical-digital circuit called a flip flop out of it.

A flip flop is a circuit that has basic memory, and can be combined into larger circuits to do more interesting things. It holds its value until the 'change' pin on it becomes active, which is when it reads from its input and copies to its output.

For instance, if you had a serial signal that you received 1 bit at a time, but needed to aggregate into 8 bit chunks to store in a cpu register, you build a shift register out of a chain of flip flops.

A Mach-Zehnder interferometer measures the phase difference between two light inputs.

One light input could be the main signal. The other light input could be a phase reference that you tune every time you start receiving a new signal (burst of data).

So now the input light could carry information by changing the phase of its sine waves (this is called phase modulation). You could measure the phase difference by tuning your internal reference light source and putting both into an MZ interferometer, then sample the output of that using a chain of InP flip flops in the form of a shift register, and now you get chunks of bits out.

Disclaimer: I'm a little out of my element in the optical realm, so I might have gotten some details wrong. I hope I helped understand the basic idea.
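
For the flip-flop / shift-register part, here's a purely software analogy of the chain described above (nothing optical about it; it just shows how a serial bit stream becomes a parallel word):

```python
class FlipFlop:
    """One bit of storage: holds its value until clocked."""
    def __init__(self):
        self.q = 0

    def clock(self, d):
        old = self.q
        self.q = d        # copy input to output on the clock edge
        return old        # the previous value feeds the next stage

def shift_in(chain, serial_bits):
    """Clock a serial bit stream through a chain of flip-flops."""
    for bit in serial_bits:
        carry = bit
        for ff in chain:
            carry = ff.clock(carry)
    return [ff.q for ff in chain]

# Receive 8 bits one at a time, then read them out as a parallel word
# (the first bit received ends up at the far end of the chain).
register = [FlipFlop() for _ in range(8)]
print(shift_in(register, [1, 0, 1, 1, 0, 0, 1, 0]))
```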

AcidTWister
9
3 days ago

Also helpful! I tried looking for InP and just kept getting asked if I meant "input".

antiduh
3
3 days ago

I searched for "inp optical" :)

kondec
2
3 days ago

Wouldn't you be able to store more data than on/off (1/0) on such a switch? So that it could in theory carry more data than a regular electronic transistor?

antiduh
3
3 days ago

Well, today we build electronics that are 2-ary ("binary"), because it is the most cost effective way to represent our data en-masse.

3-ary ("ternary") computers have been made before, but the economics aren't great, at the silicon level; increasing the information storage capacity costs a lot more silicon real estate than if you had a 2-ary design.

Is it possible? Probably. I think it'd be difficult to make a 3-ary optical flip flop, but probably could be done.

Is it worth doing? Probably not. Especially when you want the device to ultimately interface with a system that is still binary, which you'll want if you want this Optical Ram to be a drop-in replacement.

A 4-ary optical ram might be worthwhile though, because then bits evenly divide into optical ram cells - 2 bits per cell. Modern SSDs do this, by charging the SSD cell to 1 of 4 charge levels. So maybe that would be worth doing.

...

If you're talking about using light as a communications medium, then we certainly can do much better than 2-ary. QAM-256 is a 256-ary modulation system that you can implement over fiber optic by controlling the phase and amplitude (brightness) of your light signal, and now you can send 8 bits per optical "state" (phase and amplitude combination).

However, even in that space, it's always a tradeoff. As you divide your channel into larger numbers of states, it becomes more difficult to discriminate between adjacent states, leading to more bit errors. Which means you have to balance out the increase in bit errors with an increase in your Forward Error Correction redundant bits, which means you spend more of your raw physical bitrate on channel overhead in order to make the channel usable.

kondec
2
3 days ago

Fantastic answer, thank you.

amethystair
36
3 days ago

monolithically integrated = Fancy term for integrated circuit, or computer chip. It means it'll be easy to manufacture.
InP = Indium Phosphide, the material it's made of
optical Flip-Flop = A "flip flop" type circuit made of optical parts. A light switch could be considered a flip flop in a way; One input makes it turn on, one input makes it turn off. In a flip flop they're two separate buttons, but the concept is the same.
Semiconductor optical amplifier-Mach–Zehnder Interferometer - This is the most complex one, but basically you know how light can interfere with itself either constructively (making it brighter) or destructively (making it darker)? This basically detects that and turns the signal to electricity.

So that's the definition of all this, but what does it all mean? Current RAM uses transistors to store 1s and 0s. That's great, and we can make them really, really small, but transistors are slow compared to light. There's a time after you tell a memory cell to store a "1" where you can't access it, because you have no idea what it'll be. It might have finished storing it, or it might have the previous value. This circuit doesn't use transistors to store information; it uses light.

Let's go back to the Mach-Zehnder interferometer. You shoot a laser into one part, and it splits the beam. One half of the split beam is then passed through something you want to analyze, usually a gas or a temperature gradient, as both of those things change the speed of light. The beams are then recombined, and you can measure the expected vs. real brightness.

I haven't found a research paper from these researchers yet, but my best guess as to what they're doing is this: The laser is fired and split, and it passes through two semiconductors that change the speed of light in them based on the current applied. The brightness change after the beams are recombined is measured and determines whether it's set to 1 or 0. The biggest advantage I'd see of this is that it's nearly instant; the moment you change the electric charge on the semiconductor the result is available.

Based on DDR4's 2133MHz speed, transistors take ~500 picoseconds (500 trillionths of a second) to set and reset. That's very fast, but optical semiconductors had 500 picosecond to 1 nanosecond response times back in 1998. I found a recent article that suggests speeds of ~100GHz should be possible with this type of memory cell. That'd be about 10 picoseconds (10 trillionths of a second) to set or reset. Of course, that doesn't take addressing and other RAM responsibilities into account, but even adding those in it should be significantly faster than transistor technology.
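
Quick sanity check on those cycle times (cycle time = 1 / frequency; real access times involve far more than one cycle, so this is only the order-of-magnitude comparison being made above):

```python
for label, freq_hz in [("DDR4-2133 clock", 2133e6), ("~100 GHz optical cell", 100e9)]:
    period_ps = 1 / freq_hz * 1e12
    print(f"{label}: one cycle is about {period_ps:.0f} ps")
# -> roughly 470 ps vs 10 ps per cycle
```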

Hopefully I was clear with all this, let me know if you have any other questions and I'll do my best to answer!

Sources:
https://en.wikipedia.org/wiki/Integrated_circuit
https://en.wikipedia.org/wiki/Indium_phosphide
https://en.wikipedia.org/wiki/Flip-flop_(electronics)
https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer
https://www.osapublishing.org/josab/abstract.cfm?uri=josab-14-11-3204

AcidTWister
7
3 days ago

No, this is great. And it makes it sound even more exciting to be honest. I hadn't really grasped how fast this memory could be potentially, but this explanation makes it fairly clear. Thank you so much!

amethystair
1
3 days ago

I'm pretty excited to see where it goes too, I'm glad I could help! <3

Sayfog
3
3 days ago

Here's the paper link, my uni doesn't have access to all OSA article but if you do have access could you give some details?

https://www.osapublishing.org/abstract.cfm?uri=CLEO_SI-2019-STh4N.5

amethystair
1
3 days ago

I don't have access, unfortunately. I'd love if someone with access could chime in and let me know how close my educated guess was.

blankfilm
2
3 days ago

Thanks for the explanation.

So in theory this means non-RAM ICs can be built using optics as well? If so, that would be a major step forward in reducing power consumption and heat in CPUs and GPUs, leading to further miniaturization and computing applications we haven’t imagined yet.

Good times.

amethystair
2
3 days ago

I actually had that thought too while researching; it would be interesting to see if you could build a processor using this technology. From what I understand, I don't see why you couldn't use this to build logic gates; there are two lasers you could modify, and if you simply offset one of them slightly you could easily get OR, NOR, XOR, AND, & NAND gates using the same tech arranged differently. The future is going to be crazy!

Accujack
2
3 days ago

article that suggests speeds of ~100Ghz should be possible with this type of memory cell

...which means that RAM would suddenly be able to keep up with the CPU... effectively, the CPU would be able to access all the memory in the system at the same throughput as present cores access their L1 cache.

A CPU core built to use this RAM wouldn't use L1/L2 cache, either, rather it would just access the ORAM, which would free up a lot of die space for other purposes, and simplify designs because you don't need to worry about managing that cache, cache coherency between cores, etc.

For comparison with present systems, imagine that your favorite multi-core threadripper was built for ORAM - the equivalent of a modern dual socket 16 core system would be roughly the same performance as a (using present technology) single chip Threadripper with 32 cores/64 threads sharing an L1 cache the size of present server DRAM (if this were possible using non optical RAM/present silicon fabs).

That's not just twice as fast, that's an order of magnitude jump in performance.

amethystair
2
3 days ago

Cache would still have some benefits because it's physically closer, but if you were to design a motherboard with optical channels between the RAM and CPU instead of copper ones that would likely more than make up for the distance. I'm excited to see where all this optical computing technology goes, it seems like it's going to be the next big step in computing.

antiduh
2
3 days ago

Be sure to remember that throughput and latency are separate, though.

If you wanted to actually get 100 GHz performance on latency, the oRAM chip would have to be located no more than c / 100 GHz / 2 ≈ 1.5 mm away. Any more than that and you can't make timing, because you're waiting on the propagation delay of light.
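
The distance budget implied there, as a one-liner (free-space speed of light; a waveguide or copper trace would be slower, shrinking the budget further):

```python
c = 3.0e8                      # m/s, free space
freq_hz = 100e9                # target 100 GHz cycle time
one_way_m = c / freq_hz / 2    # round trip must fit in one period
print(f"max one-way distance ~ {one_way_m * 1000:.1f} mm")   # ~1.5 mm
```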

Accujack
2
3 days ago

Right, but 100 GHz isn't necessary. Modern chips run at most 5 GHz, so let's say 10 GHz throughput, or a 15 mm gap. Latency is already worse than that anyway, with copper wire communicating at e.g. 0.6c.

In any case it's a huge speedup.

tjmann96
1
3 days ago

Technically every single word in every single language was "made up."

michel_v
88
3 days ago

Is this new way of making RAM impervious to Rowhammer attacks?

geeky_username
88
3 days ago

It's optical memory, so yes, something like rowhammer wouldn't work.

Though who knows what kinds of hacks will be available for optical

nicktohzyu
6
3 days ago

Will likely have to exploit the electronic side though, the optical aspect is pretty isolated

elkshadow5
23
3 days ago

What’s Rowhammer?

michel_v
51
3 days ago

Basically a type of attack where a process exploits the way conventional RAM is set up physically to flip desired bits on and off, and then manages to infer the contents of memory owned by another process (something that's normally not possible).

Sterlingz
17
3 days ago

So, ELI5? Actually nevermind

michel_v
126
3 days ago

Basically like knocking on a wall to find where it's hollow, except more advanced so you manage to deduce the layout of the room behind the wall, just by knocking on the wall.

OMGwtfNOTnow
19
3 days ago

Super awesome explanation. Thank you!

michel_v
4
3 days ago

Thanks!

Pipolinoo
8
3 days ago

Haha wow, that was spot-on for an ELI5. Well done!

dkyguy1995
2
3 days ago

This is a really good eli5

meneldal2
2
3 days ago

Rowhammer works because the cells are charged so they have an effect on their neighbors (which is supposed to be small but turns out it isn't).

A photon has no charge, you can't make it affect its neighbors in that way.

willis936
45
3 days ago

Is memory bandwidth actually a serious cause of stalls? I was under the impression that main memory latency is what caused stalls (and what most of the transistors in processors are spent mitigating).

endless_sea_of_stars
48
3 days ago

Depends on the application. For example, in-memory databases are much more reliant on transfer rates as opposed to latency.

bradn
10
3 days ago

I don't know if I buy that - often you have to bounce through tree links several times before you can reach the data element you're looking for.

That's if you're only following one reference - if you have to go a few deep in linked structures, multiply the bounces by that before you can find a bigger chunk of data you actually want.

Latency is probably much more important for most database workloads.

endless_sea_of_stars
13
3 days ago

I should have been more clear. I was referring to analytic (OLAP) databases. For example the Vertipaq engine (Sql Server Analysis Services and PowerBI) streams gigabytes of compressed data blocks to generate aggregates (sum, avg, max, etc). Latency doesn't matter so much in that case.

mer_mer
6
3 days ago

Yup. That's why IBM has 8 way SMT (hyperthreading) in Power9. Database operations are constantly stalling.

SandyDelights
1
3 days ago

I believe he’s referring to solid state drives (in-memory databases) rather than actual databases. In which case, your latency would already be reduced, so further reduction in latency times would have less of a benefit, as opposed to disk storage, where latency is much more of a restriction.

bradn
1
3 days ago

Whether in RAM, flash, or spinning rust, latency is still a bigger deal for the kind of database I describe than throughput.

SandyDelights
2
3 days ago

I was merely pointing out what the dude was saying, because the lack of a dash in “in-memory database” confused me for a good five seconds.

Throughput can be more relevant in certain systems, as he’s clarified in another comment.

Shitty__Math
16
3 days ago

Almost every GPGPU algorithm is fully bottlenecked by raw memory bandwidth these days. It is 1:1, and memory bandwidth is not scaling with GPU compute power, so over time they are becoming more and more memory-starved. They are coming up with memory compression to keep up (see delta compression from Maxwell onward). If you look at the 2000-series GPUs, you see almost every cache size was doubled this generation; they are having trouble feeding it.

AnyoneButWe
4
3 days ago

We do industrial image processing and don't care that much about latency. Most operations are very linear in memory access. But we do care about bandwidth as our cameras output roughly 5GB/s of raw data. With about 5 operations (linear scans of the whole image) per frame, that's a lot for customer computers.

willis936
1
3 days ago

That’s because so many transistors are already dedicated to handling the latency issues of main memory. If main memory could give data in less than 10 cycles then more than half of the silicon would be opened up to more processing.

SIMD applications do eat throughput though.

FinalFortune_
4
3 days ago

Hey guys, I've studied light-based computing before, and I'm here to say why this will never get outside of a lab.

I'll keep this short. Simply put, this is RAM that works using lasers (see u/joeflux's comment).

The problem is, light is massive! You think, "how can a ray of light be big? It's everywhere!"

So... how big a laser's light is varies based on its place on the spectrum (oversimplified: its color).

Example: red light is about 600nm in wavelength, blue light is about 400nm, etc.

Companies like AMD are starting to use 7nm transistors now. 7 nanometers. In the same space this RAM takes up, there can be ~85 times as many transistors inside a traditional microchip. It won't work because, to even be useful, this stuff will take up half a room.

TL;DR, ELI5: This RAM uses light. Light is big. Traditional silicon isn't. This "RAM" has to be very big to be useful.

joeflux
2
3 days ago

FWIW I think they are using light at 1550nm, just going by the "InP" bit (indium phosphide).

The smallest you could reasonably get with current technology would be around 100nm (e.g. argon fluoride, maybe?).

You can't really rely on "future technology" here, because if there were an easy way to get a laser with a very small wavelength, then chip manufacturers would be using it to make their chips. You wouldn't ever overtake them.

Having said that, size isn't everything. There are many other considerations, e.g. wasted heat and power consumption (which are the same thing, really). This is a huge deal, because it stops you from just making a really dense brick with a huge amount of capacity.

Would you really care if the RAM chips in your PC or data centre were a thousand times larger (which is only 10 times larger in each direction)?

Robyx
3
3 days ago

Technically, this also counts as solid-state, I believe.

As opposed to a non solid-state memory that would use moving parts or magnetic cores.

For example, optical relays are solid-state.

But I see what you mean. As far as I know, the optical part is not vulnerable, but the transducers are.

Kxf7Checkmate
3
3 days ago

So correct me if I'm wrong as I've not heard of optical RAM before, but if it works on light and a standard pc runs on electricity are you not going to need to convert this at some point? That could be an entirely different bottleneck.

TheOnlyBliebervik
5
3 days ago

Maybe one day when the entire computer is light based. For now, you can use optical transceivers which convert back to electrical signals.

saluksic
4
3 days ago

Fibers transmit most of the internet around the world as light already, so it’s apparently already solved.

RoastedWaffleNuts
3
3 days ago

Converting an electrical signal to light has been solved. Doing so at low power and RAM speed is a different beast.

TheCombinatorRace
1
3 days ago

If you are talking about data being lost when power is removed, then that is fine. RAM is volatile memory which loses data when there is no power. RAM is only meant to be a cache for data that is currently being computed on.

Mike312
2
3 days ago

So this is basically RAM operating with a laser instead of electrons? Any chance this has some relation to how a CPU could theoretically operate with lasers? Like, could the same technology apply there as well? Does it already apply there and they took it from that? Or is light-based computing passe and everyone is all in on quantum these days?

sangeyashou
2
3 days ago

I read somewhere that translating to light and back takes more processing power, which cancels the advantage of using light in the first place in terms of speed. Is this true?

GedEllus214
2
3 days ago

So this is an STNG isolinear chip?

https://memory-alpha.fandom.com/wiki/Isolinear_chip

Wolfmilf
1
3 days ago

Would something like cosmic rays be able to flip o-RAM bits like with conventional RAM?

DiscombobulatedSalt2
2
3 days ago

It still can, but the mechanism is different and less frequent. It depends on the material properties (scintillation).

SinisterBajaWrap
1
3 days ago

Yes, but it'd be less likely.

RyanWilliams704
1
3 days ago

How much dedicated ram does it take to run the server