Assumptions:
- No external circuits are connected (other than the programming circuit, which we believe is correct).
- The uC is not defective.
- By destroying I mean releasing the blue smoke of death, not merely bricking it in software.
- It's a "normal" uC, not some one-in-a-million special-purpose device.
Has anyone ever seen something like this? How is it even possible?
Background:
A speaker at a meetup I attended said it is possible (and not even that hard), and some others agreed with him. I have never seen it, and when I asked them how it could be done, I got no real answer. I'm very curious now and would appreciate some feedback.
microcontroller
software
damage
Juan Carlos
source
Answers:
Of course you can, with the HCF (Halt and Catch Fire) instruction!
That said, I'd argue it is impossible without an external circuit beyond the power supply and the like.
Even attaching some badly wired external connections probably won't break it: if you tie all GPIOs to one supply rail and set them as outputs driven to the opposite rail, they could dissipate quite a lot of power, but a GPIO pin is most likely short-circuit protected, so nothing harmful will happen.
Designing an external circuit that destroys the chip on demand is, in my opinion, not trivial. The first thing that comes to mind is a supply at a fairly high voltage, an NMOS, and a resistor:
simulate this circuit – Schematic created using CircuitLab
The operation is simple: if the micro releases GPIOx, M1 turns on, Vcc rises, and your chip goes up in flames. Note that this is a sloppy setup: for example, HV must be switched on only after making sure GPIOx is held firmly at ground, some transistors may not like a VG of -5 V, and so on... but you get the picture.
source
Disclaimer: supercat said that first in a comment.
Actually, it is not possible to physically destroy most MCUs, but it is possible to wear one out enough that it malfunctions to the point of being unusable. I have experience with TI's MSP430, so here it goes:
Those MCUs allow reprogramming the whole flash at any time. Not only is it possible to wear out the flash by rewriting it millions of times until it fails, but the on-chip flash programming generator can also cause failures on lower-end parts if it is configured incorrectly. There is an allowed frequency range for programming; outside that range (slower), the programming time may become excessively long and damage the flash cells. After only a few hundred cycles, it is possible to "burn" the flash cells, causing permanent failure.
Also, some models allow overclocking the core by increasing the internal voltage. The MCU runs from a 1.8-3.6 V supply, but the core itself is designed to run at 1.8 V. If you overclock the core too far on a 3.6 V rail while toggling all I/Os, activating all peripherals and running at a blazing 40 MHz (the normal maximum is 25 MHz on the larger models) in a small closed case, you may end up frying the core from overheating. Some people claim to have reached those frequencies (usually the DCO fails first and the chip is saved, but well... maybe).
Just try it?
source
The Stack Exchange question "Is it really a bad idea to leave an MCU input pin floating?" describes several circumstances in which a chip may be damaged by an open-circuit pin. Edit: as an example, Spansion's Analog and Microcontroller Products documentation says:
The condition in this question is exactly open-circuit pins.
So our task is to escalate from may damage to will damage the pin. I think that is enough to go beyond 'bricking'.
One mechanism identified in that answer is driving an input pin to a mid-value voltage, where the two complementary transistors are both 'on'. Operating in that mode, the pin interface may get hot or fail.
An input pin has a very high impedance and is also a small capacitor. Presumably, there is enough coupling between adjacent pins that toggling the neighbouring pins fast enough could drive charge onto the input pin and push it into that 'hot' state. Might half the I/O pins being driven into that state warm the chip up enough to cause damage?
(Is there a mode where the capacitance of an open-circuit pin might be used like a voltage doubler? Hmm.)
I also think damaging flash is enough. I think that is bad enough to make the chip useless.
It doesn't need to be all of flash, only the page containing the power-on, RESET, etc. vectors. Hitting the write-cycle limit on a single page might take only a few tens of seconds.
I have an indication (but no solid evidence) that for some MCUs it may be worse. At a presentation I attended a couple of years ago, someone asked why competitors offered parts with much higher rated flash write cycles. The presenter (from a large, unnamed MCU manufacturer) said they took a much more conservative approach in their flash memory specifications: their guarantee was defined at a significantly higher temperature than the industry norm. Asked "so what?", he said that at the temperatures they specified, several competitors' products would have significantly lower rewrite lifetimes than their parts; my recollection is that 5x would become <1x. He said it is very non-linear. I took that to mean programming at 80°C instead of 25°C would be a "bad thing".
So flash rewriting combined with a very hot chip might also render it useless in less than 10 seconds.
Edit:
I think "releasing the blue smoke of death" is a harder constraint than required. If any of the: RESET pin circuit, brown-out-detector, power-up circuitry, RC or crystal oscillator (and probably a few other circuits) could be damaged, the chip would be rendered useless.
As others have noted, breaking flash would kill it irreparably too.
"Smoke" sounds impressive, but less obvious fatal attacks are still fatal, and much harder to detect.
source
One potential source of such destruction is SCR latchup, where unintended (intrinsic) transistors in a chip get together to form a kind of TRIAC which can then sink a lot of current. This can easily blow bond wires, and I've even seen plastic encased devices visibly warped because of the heat produced.
The typical cause is driving (even momentarily) an input to above or below the supply or ground rails respectively, but I guess you might see it happen if an input was left floating. And it's not then hard to imagine a circuit where the input's floating-ness was software controlled (although that would be a very silly thing to allow).
source
It's POSSIBLE that software intentionally written for the purpose, targeted at a very specific processor, might be able to force overclocking to the point at which the processor would overheat. Provided, of course, that the processor contains software-configurable clock-control registers.
It's NOT possible that ALL processors can be damaged this way, of course. If that were true, there'd've been billions of Z80s and 6800s and 6502s laid by the wayside by wayward software-writing tyros back when we were still typing in machine code manually, making lots of random mistakes.
source
This is my entry for ruining a microcontroller with as few parts as possible...
Just toggle the output pins at a few kHz!
You still might not see smoke, depending on the internal failure mode though.
simulate this circuit – Schematic created using CircuitLab
--Edit, added Aug 22--
Now, I don't think you can ruin a microcontroller within the criteria given. But you can EASILY ruin external circuitry with the wrong code. An example that comes to mind is a simple boost converter I designed recently: simply pausing the code while debugging could short an inductor to ground through a MOSFET. POOF
source
In terms of regular user mode code I don't think you can write anything that will break the chip.
However, I do remember the days of microprocessors that could be destroyed in less than a minute, or even seconds, if the heat sink fell off. Then came thermal detection circuits that would turn the clock down if the part got too hot. Now that we can put in far more transistors than can be used at once, chips are capable of making more heat than the heat sink can dissipate, and it's the power-management and thermal circuits that keep them safe; see, for example, Intel Turbo Boost 2.0. It therefore seems quite possible to melt down a chip if you can bypass or raise the limits of those circuits. So, if they are under software control (no idea; maybe it requires a BIOS update?), you could run a bunch of parallel do-nothing loops, along with integrated GPU work, hardware H.264 decoding and encoding, and anything else the chip can do, all at once, until the chip overheats and emits the magic blue smoke.
source
I'm most familiar with the STM32 processors, so these apply most to that family. But similar approaches may be possible with other processors also:
There is a permanent write-protect mode. So if you program that bit, together with some useless program, into the flash, the MCU can never be used again. I don't know if this counts as 'bricking', but it does involve a permanent hardware mechanism.
The programming pins can be reconfigured as GPIO. Because the clock pin is actively driven by the programming device, reconfiguring it as an output could be used to cause a short circuit. Most probably it would break only that single pin, but since it is a programming pin, that would be quite bad.
As mentioned by dirkt, the PLLs can be used to overclock the processor, which could possibly cause it to overheat or otherwise get damaged.
source
Whoever said that doesn't understand how involved the design process for such chips is. That doesn't mean slip-ups don't happen, or that the code coverage of the regressions and corner-case tests never misses things, but to claim that ALL or even most processors have this flaw is logically dubious.
Just ask yourself what happens when an overclocker exceeds the timing requirements (assuming the chip doesn't overheat): the chip fails, perhaps corrupting memory or even HDD accesses, but fundamentally the processor will fire back up again and even run the OS again once the corruption is fixed. So what sort of deliberately crafted code could possibly cause MORE disruption than this scenario? Very likely none.
TLDR; All processors have this fault - NOT
source
I believe that it is certainly possible to physically destroy a microcontroller (MC) with software. All that is required is the combination of the MC executing a "tight" loop of instructions that causes 100% utilization and a "defective" heat sink that lets the heat inside the chip build up. Whether the failure takes seconds, minutes or hours depends on how fast the heat builds up.
I have a laptop computer that I can only use at 50% continuous utilization; if I exceed this, the computer shuts itself down. This means that at 50% usage the MC temperature stays below the trigger point, and as usage increases, the temperature rises until the trigger point is reached. If the thermal shutdown circuit did not work (or did not exist), the temperature of the MC would keep increasing until the chip was destroyed.
source
simulate this circuit – Schematic created using CircuitLab
The code causes the MCU to push PB2 high while pulling PB4 low; combined with a short between the two pins, which may be an innocent error like an accidental soldering bridge, this creates a path from VDD through PB2 and PB4 to GND, and the port drivers of PB2 and/or PB4 will quickly fry.
source