content-aware filling
For what it's worth, GIMP has had the resynthesizer plugin since the mid or late 2000's, and at the time it was significantly ahead of Adobe's Content Aware Fill.
As you can see from the sign, hard drive parking had not yet been invented.
Therefore, I think they'd get out a microscope and oscilloscope and start trying to reverse-engineer it. Probably speed up the development of computer technology quite a bit, by giving them clues on what direction to go.
Knowing what something is doesn't necessarily teach people how it was made. No matter how much you examine a sheet of printed paper, someone with no conception of a laser printer would not be able to derive that much information about how something could have produced such precise, sharp text on a page. They'd be stuck thinking about movable metal type dipped in ink, not lasers burning powdered toner onto a page.
If you took a modern finFET chip from, say, TSMC's 5nm process node, and gave it to electrical engineers of 1995, they'd be really impressed with the physical three-dimensional structure of the transistors. They could probably envision how computers make it possible to design those chips. But they'd have no conception of how to produce EUV at the wavelengths necessary to make photolithography possible at those sizes. No amount of examination of the chip itself will reveal the secrets of how it was made: very bright lasers fired at an impossibly precise stream of liquid tin droplets, highly polished mirrors that focus that EUV radiation through masks onto the silicon to make the 2-dimensional planar pattern, then advanced techniques for lining up 2-dimensional features into a 3-dimensional stack.
It's kinda like how we don't actually know how Roman concrete or Damascus steel was made. We can actually make better concrete and steel today, but we haven't been able to reverse engineer how they made those materials in ancient times.
Do you have a source for AMD chips being especially energy efficient?
I remember reviews of the HX 370 commenting on that. Problem is that chip was produced on TSMC's N4P node, which doesn't have an Apple comparator (M2 was on N5P and M3 was on N3B). The Ryzen 7 7840U was N4, one year behind that. It just shows that AMD can't get on a TSMC node even within a year or two of Apple.
Still, I haven't seen anything really putting these chips through their paces and actually measuring real-world energy usage while running a variety of benchmarks. And benchmarks themselves only correlate with specific ways that computers are used, and aren't necessarily supported on all hardware or OSes, so it's hard to get a real comparison.
SoCs are inherently more energy efficient
I agree. But that's a separate issue from the instruction set, though. The AMD HX 370 is a SoC (well, technically a SiP, as the pieces are all packaged together but not actually printed on the same piece of silicon).
And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed. That's why the RISC-versus-CISC distinction is basically obsolete. These chip designers are making engineering choices about how much silicon area to devote to specific functions, based on their modeling of how that chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, various specialized tasks related to hardware-accelerated video or cryptography or AI or whatever else, and then deciding how that fits into the broader chip design.
Ultimately, I'd think that the main reason why something like x86 would die off is licensing reasons, not anything inherent to the instruction set architecture.
it's kinda undeniable that this is where the market is going. It is far more energy efficient than an Intel or AMD x86 CPU and holds up just fine.
Is that actually true, when comparing node for node?
In the mobile and tablet space Apple's A series chips have always been a generation ahead of Qualcomm's Snapdragon chips in terms of performance per watt. Meanwhile, Samsung's Exynos has always been behind even more. That's obviously not an instruction set issue, since all 3 lines are on ARM.
Much of Apple's advantage has been a willingness to pay for early runs on each new TSMC node, and a willingness to dedicate a lot of square millimeters of silicon to their gigantic chips.
But when comparing node for node, last I checked, AMD's lower-power chips designed for laptop TDPs have similar performance and power draw compared to the Apple chips on that same TSMC node.
The person who wrote it has been gone for like four years
Four years? You gotta pump those numbers up. Those are rookie numbers.
Rechargeable batteries weren't really a thing in the 70's. For consumer electrical devices, batteries were single-use, and anything that plugged in needed to stay plugged in while in operation.
Big advances in battery chemistry made things like cordless phones feasible by the 80's, and all sorts of rechargeable devices in the 90's.
(latest version "Froyo")
This is Gingerbread erasure!
Sorry best I can do is a programmable turtle that moves around as a pen.
If the logic gates can feed back onto themselves, you can build a simple flip flop that can store a bit.
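To make that concrete, here's a minimal sketch of the classic feedback circuit behind that idea, a NOR-based SR latch, simulated in Python. The function names and the little settling loop are illustrative, not any particular hardware description.

```python
def nor(a, b):
    """A single NOR gate: output is 1 only when both inputs are 0."""
    return int(not (a or b))

def sr_latch(s, r, q=0, qn=1):
    """Settle a NOR-based SR latch. Each gate's output feeds back into
    the other gate's input -- that loop is what lets it hold a bit."""
    for _ in range(4):  # a few passes is enough for the feedback to settle
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = sr_latch(s=1, r=0)               # pulse "set"
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)   # inputs idle: the bit is retained
print(q)  # 1 -- the latch remembers being set
q, qn = sr_latch(s=0, r=1, q=q, qn=qn)   # pulse "reset"
print(q)  # 0
```

With both inputs low, the feedback loop just keeps reinforcing whatever state it's in, which is exactly the "storing a bit" part; real static RAM cells are built from the same cross-coupled idea.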
Trick the algorithm by reporting all MMA content too.
"actual image your camera sees" is a term that is hard to define with astrophotography, because it's kinda hard to define with regular digital photography, too.
The sensor collects raw data at each of its pixels: the light that makes it past that pixel's color filter excites electrons on that particular pixel, and that charge gets read out by the image processing chip, where each pixel is assigned a color and the values get combined into the pixels of a final image.
So what does a camera "see"? It depends on how the lenses and filters in front of that sensor are set up, and it depends on how susceptible to electrical noise that sensor is, and it depends on the configuration of how long it looks for each frame. Many of these sensors are sensitive to a wide range of light wavelengths, so the filter determines whether any particular pixel sees red, blue, or green light. Some get configured to filter out all but ultraviolet or infrared wavelengths, at which point the camera can "see" what the human eye cannot.
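A toy sketch of that per-pixel color filtering, assuming a standard RGGB Bayer mosaic (the specific pattern is an assumption; sensors vary): each pixel records only the one channel its filter passes, and software has to fill in the other two afterward.

```python
import numpy as np

# Toy 4x4 sensor readout: one raw brightness value per pixel.
raw = np.arange(1, 17, dtype=float).reshape(4, 4)

def bayer_channel(y, x):
    """Which color filter sits over pixel (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# Scatter each reading into its own channel. The two empty channels at
# every pixel are what real demosaicing interpolates from the neighbors.
rgb = np.zeros((4, 4, 3))
for y in range(4):
    for x in range(4):
        rgb[y, x, "RGB".index(bayer_channel(y, x))] = raw[y, x]

# In an RGGB mosaic, half the pixels see green, a quarter each red/blue.
print([int((rgb[..., c] > 0).sum()) for c in range(3)])  # [4, 8, 4]
```

Swapping that filter array for a UV- or IR-pass filter is, roughly, how a camera ends up "seeing" what the eye can't.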
A long exposure can collect light over a long period of time to show even very faint light, at least in the dark.
There are all sorts of mechanical tricks at that point. Image stabilization tries to keep the beams of focused light stabilized on the sensor, and may compensate for movement with some offsetting movement, so that the pixel is collecting light from the same direction over the course of its entire exposure. Or, some people want to rotate their camera along with the celestial subject, a star or a planet they're trying to get a picture of, to compensate for the Earth's rotation over the long exposure.
And then there are computational tricks. Just as you might physically move the sensor or lens to compensate for motion, you can instead process the incoming sensor data with the understanding that a particular subject's light will hit multiple pixels over time, and add it all together in software rather than on the sensor's own charged pixels.
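A hedged sketch of that software-side stacking, with made-up numbers: a synthetic "star" fainter than the per-frame noise, and Gaussian noise standing in for sensor noise. Real pipelines estimate each frame's drift from the images themselves (registration); here it's assumed known to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: one faint star, well below the per-frame noise floor.
truth = np.zeros((32, 32))
truth[16, 16] = 0.5

def capture(shift):
    """One short exposure: the scene has drifted by `shift` pixels,
    and Gaussian noise stands in for sensor/read noise."""
    return np.roll(truth, shift, axis=(0, 1)) + rng.normal(0.0, 1.0, truth.shape)

# Known drift for each frame (real software would estimate this).
shifts = [(i % 3, (i // 3) % 3) for i in range(400)]
frames = [capture(s) for s in shifts]

# Undo each frame's drift, then average: the star's light adds coherently
# while the zero-mean noise averages away, so SNR grows ~ sqrt(frames).
stacked = np.mean(
    [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)],
    axis=0,
)

peak = np.unravel_index(np.argmax(stacked), stacked.shape)
print(peak)  # the faint star re-emerges at (16, 16)
```

The single-frame signal-to-noise here is 0.5:1, so the star is invisible in any one exposure; after 400 aligned frames the noise has shrunk by about a factor of 20 and the star dominates.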
So astrophotography is just an extension of normal photography's use of filtering out the wavelengths you don't want, and processing the data that hits the sensor. It's just that there needs to be a lot more thought and configuration of those filters and processing algorithms than the default that sits on a typical phone's camera app and hardware.