Saturday, August 31, 2019

Quantum Squirunnies: “How to Teach Quantum Physics to Your Dog” by Chad Orzel

“Uncertainty is not a statement about the limits of measurement, it’s a statement about the limits of reality. Asking for the precise position and momentum of a particle doesn’t even make sense, because those quantities do not exist. This fundamental uncertainty is a consequence of the dual nature of quantum particles.”

In “How to Teach Quantum Physics to Your Dog” by Chad Orzel


1 – Wavefunctions: Every object in the universe is described by a quantum wavefunction;
2 – Allowed states: A quantum object can only be observed in one of a limited number of allowed states;
3 – Probability: The wavefunction of an object determines the probability of being found in each of the allowed states;
4 – Measurement: Measuring the state of an object absolutely determines the state of that object.

In “How to Teach Quantum Physics to Your Dog” by Chad Orzel
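The four postulates above can be sketched as a toy program. This is a hedged illustration, not anything from Orzel's book: the three-state system and its amplitudes are invented round numbers chosen only so the probabilities come out simple.

```python
import random

# Postulate 1: the object is described by a wavefunction -- here, a list
# of complex amplitudes over its allowed states (invented toy values).
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j, 0.0 + 0.0j]

# Postulates 2 and 3: only these states can be observed, each with
# probability |amplitude|^2 (Born rule).
probs = [abs(a) ** 2 for a in amplitudes]

# Postulate 4: a measurement yields exactly one of the allowed states,
# sampled according to those probabilities.
def measure():
    return random.choices(range(len(probs)), weights=probs)[0]

outcome = measure()  # always 0 or 1 here, since state 2 has zero amplitude
```

With these amplitudes the probabilities are 0.36, 0.64 and 0, which already shows the tension the review discusses below: the wavefunction evolves smoothly, but each measurement delivers a single definite outcome.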

Fairly basic take on Quantum Mechanics. It had to be to make it intelligible to a make-believe-Orzel-disguised-as-a-dog…

It doesn't fully address the core ambiguity in the four postulates Orzel uses (see the quote above, in particular the fourth): what exactly is measurement? The postulates simply assume measurement to be a primitive notion of the theory, not reducible to anything more fundamental. Orzel can't answer the troublesome question of why measurement outcomes are unique; rather, he makes that uniqueness axiomatic, turning it into part of the very definition of a measurement. And since the book does not address what the measurement process is actually doing, it also sidesteps the question "Did you even succeed in measuring the thing you thought you were measuring?" One could argue that "quantum correlations" and entanglement have nothing to do with either spooky action at a distance or hidden variables: on this view they are caused by the purely classical phenomena of inter-symbol interference and noise, associated with Shannon's definition of a single bit of information. You cannot measure two independent parameters from an entity manifesting only one such bit, and that would be the ultimate origin of Heisenberg's Uncertainty Principle: any attempt to perform a second measurement is guaranteed to be corrupted by the intrinsic properties (noise and inter-symbol interference) inherent in one of Shannon's "bits".

The question of "what exactly is measurement?" is squarely addressed and answered in the Transactional Interpretation, which yields a physical (as opposed to decision-theoretic) derivation of the Born Rule; Orzel only mentions it en passant, preferring to dedicate a whole chapter to the MWI. For specifics, including calculations, see the explicit derivation of the Born Rule for radiative processes. I'm not sure the dog would be able to understand it, though… I loved the way Orzel explains the Uncertainty Principle by adding wave-functions, and the way he uses the same approach to explain Schrödinger's Cat.
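Orzel's add-the-waves picture of uncertainty is easy to reproduce numerically. In this sketch (the grid spacing, central wavenumber and spread are arbitrary choices of mine, not Orzel's), superposing more cosines with different wavenumbers, i.e. a larger momentum spread, produces a narrower packet in position:

```python
import math

def packet_width(n_waves, dk=0.1, k0=5.0):
    """Superpose n_waves cosines with wavenumbers spread around k0 and
    return a crude width of the resulting packet: the span over which the
    summed amplitude stays above half its central peak."""
    xs = [i * 0.01 for i in range(-2000, 2001)]
    ks = [k0 + dk * (j - (n_waves - 1) / 2) for j in range(n_waves)]
    env = [abs(sum(math.cos(k * x) for k in ks)) for x in xs]
    peak = max(env)  # all the waves add constructively at x = 0
    inside = [x for x, e in zip(xs, env) if e > peak / 2]
    return max(inside) - min(inside)

# More momentum components (larger momentum spread) => narrower packet,
# a numerical cartoon of delta-x * delta-p ~ constant.
wide, narrow = packet_width(5), packet_width(20)
```

Running it shows `wide` (5 waves) is several times larger than `narrow` (20 waves): squeezing the packet in position costs you a broader spread in momentum, which is exactly the trade-off Orzel illustrates.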

On the other hand, there is no mystery to the Born rule. The entire process of computing a wave-function and then summing the squares of its real and imaginary parts amounts to nothing more than computing the power spectrum of a Fourier transform. The power spectrum (as the name implies) simply measures the energy accumulated/detected within each "channel" of a filter bank. When the energy happens to arrive in discrete, equal quanta, the ratio (total energy)/(energy per quantum) yields the number of quanta accumulated within each channel. In other words, the entire mathematical procedure amounts to nothing more than the description of a histogram, which is why it yields probability estimates.

Every photon, gauge boson or quantum object that appears to us to travel at c is at the same time an "observer" of the part of the universe that involves its emission-flight-detection path (or better, "process"). It experiences that space-time "chunk" of the universe as a single point of existence, with no distances and no intervals involved. That's why "paths" make no sense for such objects, and why wave/particle duality is not resolved until detection: time intervals (and space distances, for that matter) have no meaning for quantum objects travelling at c. Their single bit of existence (from their point of view) is a probability function for us, until it collapses when we detect them. But for them, that collapse happens at the same "time" they are emitted, because there is no time involved in their experience of the universe; the whole emission-flight-detection process is experienced at once from their point of view. So the universe would "exist" the same way without conscious beings; it would just not be "perceived" the way you are used to (via the space-time ratios conscious beings create in our brains).
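The power-spectrum claim can be made concrete in a toy sketch. This is my illustration, not the author's: a naive DFT of an invented sampled "wavefunction", whose normalised power spectrum behaves like a histogram of probabilities over frequency channels.

```python
import cmath

def power_spectrum_probs(samples):
    """Naive DFT; return the normalised power spectrum (sums to 1)."""
    n = len(samples)
    spectrum = []
    for k in range(n):
        coeff = sum(s * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m, s in enumerate(samples))
        spectrum.append(abs(coeff) ** 2)  # real^2 + imag^2: the "power"
    total = sum(spectrum)
    return [p / total for p in spectrum]

# A pure tone at frequency bin 3 puts essentially all the probability
# into that one "channel" of the filter bank.
probs = power_spectrum_probs(
    [cmath.exp(2j * cmath.pi * 3 * m / 8) for m in range(8)])
```

Whatever one makes of the interpretive claim, the arithmetic itself is uncontroversial: squared DFT magnitudes, normalised, always form a valid probability distribution.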
My point is specifically that the multiple interpretations of quantum mechanics mean the philosophical question of whether the world is deterministic or not is still unsolved. Neglecting pilot wave theory because of its impracticalities is reasonable; concluding that the world is fundamentally random is not. All reasonable alternatives would need to be discarded, not just a few fringe models.

Of course, the Uncertainty Principle is more fundamental than the Born rule. The former arises from the logical contradiction of trying to locate a particle of non-zero size at a point in space. A particle is not located at any one point in space but in a region of space that mathematically contains an infinite number of points. There is a contradiction between the math we're using and our physical intuition. We remedy this logical contradiction by imagining that the location of a point particle in a region of space is governed by probabilities. The Born rule uses the square of the amplitude, and it works because the square represents the area perpendicular to the velocity vector of an imaginary point particle.

One last piece of advice: next time lose the dog...

Friday, August 30, 2019

Quantum Tunneling: "The Physics of Superheroes" by James Kakalios

“One aspect of quantum mechanics that is difficult for budding young scientists to accept is that the equation proposed by Schrödinger predicts that under certain conditions matter can pass through what should be an impenetrable barrier. In this way quantum mechanics informs us that electrons are a lot like Kitty Pryde of the X-Men, who possesses the mutant ability to walk through solid walls (as shown in fig. 32), or the Flash, who is able to “vibrate” through barriers. (illustrated in fig. 33). This very strange prediction is no less weird for being true. Schrödinger’s equation enables one to calculate the probability of the electron moving from one region of space to another even if common sense tells you that the electron should never be able to make this transition. Imagine that you are on an open-air handball court with a chain-link fence on three sides of the court and a concrete wall along the fourth side. On the other side of the concrete wall is another identical open-air court, also surrounded by a fence on three sides and sharing the concrete wall with the first court. You are free to wander anywhere you’d like within the first court, but lacking superpowers you cannot leap over the concrete wall to go to the second court. If one solves the Schrödinger equation for this situation, one finds something rather surprising: The calculation finds that you have a very high probability of being in the first open-air court (no surprise there) and a small but nonzero probability of winding up on the other side of the wall in the second open-air court (Huh?). Ordinarily the probability of passing through a barrier is very small, but only situations for which the probability is exactly zero can be called impossible. Everything else is just unlikely. This is an intrinsically quantum mechanical phenomenon, in that classically there is no possible way to ever find yourself in the second court.
This quantum process is called “tunneling,” which is a misnomer, as you do not create a tunnel as you go through the wall. There is no hole left behind, nor have you gone under the wall or over it. If you were to now run at the wall in the other direction it would be as formidable a barrier as when you were in the first open-air court, and you would now have the same very small probability of returning to the first court. But “tunneling” is the term that physicists use to describe this phenomenon. The faster you run at the wall, the larger the probability you will wind up on the other side, though you are not moving so quickly that you leap over the wall. This is no doubt how the Flash, both the Golden and Silver Age versions, is able to use his great speed to pass through solid objects, as shown in fig. 33. He is able to increase his kinetic energy to the point where the probability, from the Schrödinger equation, of passing through the wall becomes nearly certain.”

In “The Physics of Superheroes” by James Kakalios
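The gulf between the electron and the runner at the wall can be put to numbers with the standard square-barrier estimate T ≈ exp(−2κL), where κ = √(2m(V−E))/ħ. This is a rough sketch in the spirit of Kakalios's example; the particular barrier heights, widths and energies below are illustrative guesses of mine, not figures from the book.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def tunneling_probability(mass_kg, barrier_j, energy_j, width_m):
    """Square-barrier tunneling estimate T ~ exp(-2*kappa*L)."""
    kappa = math.sqrt(2 * mass_kg * (barrier_j - energy_j)) / HBAR
    return math.exp(-2 * kappa * width_m)

eV = 1.602176634e-19  # joules per electron-volt

# An electron 1 eV below a 2 eV barrier that is 1 nm wide:
# small but very much nonzero (of order 1e-5).
t_electron = tunneling_probability(9.109e-31, 2 * eV, 1 * eV, 1e-9)

# A 70 kg person at a 20 cm concrete wall (made-up macroscopic numbers):
# the exponent is so enormous that exp() underflows straight to 0.0
# in double precision -- "not impossible, just unlikely" beyond words.
t_person = tunneling_probability(70.0, 1000.0, 500.0, 0.2)
```

The point of the exercise is the scaling: the exponent grows with √m and with the barrier width, so moving from electron masses to human masses takes the probability from routinely observable (tunnel diodes, alpha decay) to effectively zero.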

One problem with travelling faster than light is that there is no requirement that this only happens in a time-like dimension. What if Superman is passing through every point of a particular inertial frame within the lower bound for the smallest possible electro-chemical event? In that case, Superman can be decisive over every action within a particular light cone as it propagates. (Except that we don't know what impact the interface between the quantum world and the classical world has, and at what point the laws switch over. Schrödinger's cat magnifies the quantum up to the large classical, but at the chemical-electrical level Superman could become subject to quantum fluctuations, which would not be good for his health, never mind the control of subsequent ramifications of events in the light cone.)

I believe the reliance on many worlds is perfectly justified. Both Marvel and DC have embraced a form of the Many Worlds hypothesis of Hugh Everett; DC's "Crisis on Infinite Earths" story-line is based precisely on this notion. I guess I am relying on a many-worlds interpretation and always choosing the one where Supes prevails. From the way I understand quantum tunneling, and I do know a thing or two about quantum physics, the matter tunneling through would simply appear on the other side of said wall or bad guy, as opposed to moving at a constant speed through the space between the two. That, along with the fact that quantum physics typically only applies to denizens of the quantum realm (RIP Mrs. Pym).

I can however pose a different theory. The amount of space both between atoms in molecular structures and within atoms themselves is massive: there is more empty space in an atom than actual atom parts. The reason that atoms can't move between these spaces, and why we can't walk through walls (I know Heinlein wrote a novel called “The Cat Who Walks Through Walls”…), is the negative charge of electrons; and yes, I do know Kakalios touched upon this. The negative charges, while very weak, are like two magnets aimed negative sides at each other: it's hard to touch the two together. However... if you disabled the negative charge, there would be no problem at all moving through a solid object. But wait! The negative charges are actually what keep atoms together... and they bond molecules... so if you disable the charges, the entire structure would basically atomize into nothingness... not fun unless you're a villain from James Bond. So here is the possibility: what if, instead of fully cutting off the charge, they just have the ability to reduce it? Lower the electron charge constant within themselves to a perfect balance, right between weak enough to phase through and strong enough to still hold the molecules together? Finding this balance would allow the physics of phase shifting to work perfectly.

Actually, I think the Pauli Exclusion Principle plays a much larger role in preventing atoms from passing through each other. Even without the charge of the electron getting in the way, an atom trying to pass through the electron cloud of another atom would run into the issue of degeneracy pressure: electrons, protons and neutrons are fermions and cannot occupy the same quantum state as another fermion at the same time.
Electrons are not merely particles; they exist in a superposition of their possible states in any particular orbital around the nucleus, so if the electron cloud of one atom passed through that of another, it would require those electrons to occupy the same state, because their superpositions would overlap. Quantum tunneling, on the other hand, would still make some sense. As I said above, it occurs instantly, but that doesn't mean the superhero would have to phase through the wall instantly; it would make much more sense for their individual particles to tunnel simultaneously and rapidly through each atom in the material they were passing through, giving it the appearance of a continuous motion, assuming they had the ability to control the quantum mechanical action of all of their atoms.

Ok. “Tunneling” is one thing. What about the rest of the Superheroes stuff? Nerd train incoming...

The problem with Superman goes a lot deeper than the points Kakalios touched on. It seems that the creators back in 1933 had a different concept for the character. As I understand it, Superman could not fly when he debuted in Action Comics #1; he could only leap over things that were beyond the range of a normal man, and he had great speed and great strength. I'm not sure if he had X-ray and heat vision yet, or if he blew with the force of a hurricane and froze objects with his breath. Clearly he was not the god he has become. The character changed from a man who could do superhuman things to a man who is virtually indestructible and has the power to move the planet Earth when America got into World War II. The country needed a hero that the Nazis could not stop. In fiction, Superman became that hero because, as noted, the limits of what he could do had not been defined.

By the time the war was over, Superman and his god-like powers were here to stay, and with each new story it was revealed he could do whatever the plot called for to save the day. A cap on his powers was never imposed, because the writers of Superman stories for comic books, television and the silver screen knew they could bring the world to the brink of destruction and, no matter how great the challenge, tweak Superman's powers to allow him to save the day. However, I agree that a character who cannot be harmed or killed poses a problem for storytelling. As a writer, I created my own superheroes and wrote stories. What I found was that you had to cheat for the villains, to give them the upper hand. Usually this is accomplished in one of three ways:

[A] The villain gets a hold of a weapon that has the power to kill the hero;

[B] The villain creates a being with equal or slightly stronger powers;

[C] You create a weakness in the hero's armor and allow the villain to exploit it.

This is why Kryptonite was written into the story. The only substance that can harm Superman comes from his own planet. When Krypton exploded, rocks from the debris were thrown out into space, and some of them entered the Earth's atmosphere, becoming meteorites. Harmless to mortals, these glowing green meteorites are highly toxic to Superman. First of all, it makes no sense that something from the character's home world would have a negative effect on him. It has been established that the closer Superman gets to the red Krypton sun, the more his powers fade. So in theory, Kryptonite would have to be remains of the red sun in order to have a negative effect; rocks from the planet would simply absorb our sun's rays and give off energy that Superman could use. The fact that Kryptonite is green reveals that its inventors did not intend for it to come from the red Krypton sun.

The destruction of Krypton in “Superman: The Movie” (1978) was the result of the sun becoming a red giant and dying. In “Man of Steel” (2013), careless mining of Krypton's core leads to the planet exploding. Either way, the established conclusion is that Kryptonite comes from the planet and not the sun. That's one reason why this is not a good weakness for Superman.

The problem I have with Superman is that the whole franchise is taken too seriously when in essence it's an OP dude who will always have enough power to defeat his opponent. The guy can literally fly into the sun for a recharge. Aside from his super strength he has super speed, super-hard impenetrable skin, super senses, and X-ray vision. He's practically impossible to defeat, whether by direct assault or by subterfuge. And since, as Shad mentioned, at some point Superman gained the power of telepathy, it's impossible to plot anything against him in secret. As for the Kryptonite: no one can even get close to Superman with that thing, since, as I've mentioned above, he has all the super senses. He can see, hear and, I'd argue, feel where the Kryptonite is approaching from; he can simply use a pole or his super breath to swat it away. And yet the authors are trying to convince us that he is constantly in some sort of danger. He is not. That's why I prefer Saitama, the One Punch Man. The author is aware that the hero is overpowered, which is why the whole manga is taken light-heartedly: the hero is never in any sort of danger, he destroys all his opponents with a single punch, and he feels no pride in his victories over foes that could destroy the whole planet. Instead, he's bored. This type of story is a lot easier to understand, and it does not require the fans to do the lazy authors' thinking for them and come up with a Deus Ex Machina to cover all the plot holes.

In a nutshell, the problem with Superman is that his powers are too great.

In the first Superman movie he turns back the clock. It may look like all he is doing is 'reversing the polarity of the Earth's spin' (he increases his relativistic mass 13.7 million times by travelling close to the speed of light), but that is just half of the story. The other half is that the whole arrow of time reverses because he is flying faster than light. Since Superman has super intelligence, what he ought to do in order to minimize crime is go back to whatever day his decision to fight crime became operational and micro-manage things until the best possible state of society, and situation for each individual, is achieved. So no more biffing super-villains, just lots of little nudges, e.g. making sure baby Lex Luthor got a nice teddy bear from Santa instead of a ray gun.

I just wish Kakalios hadn’t limited the book just to high school algebra. Some topics required higher math to be properly addressed.

NB: Don't forget, Deadpool killed the marvel universe. Since he is totally aware that he is a fictional character, he could easily read the DC comics and movies too and kill all of them as well. Did they discover that-far from gaining superpowers from their doses of radiation, The Hulk and Spiderman would actually be bald and sterile and not long for this world? Batman has been shown escaping Darkseid's Omega beams through agility alone. Flash barely escapes those beams and Superman, himself, gets hit by them while actively trying to escape them. Batman must, on the other hand, therefore, have superpowers in terms of strength and agility. More to the point, surely Batman's cape is nowhere near big enough to enable him to glide in the first place. A real-life hang-glider would be far too big and cumbersome to allow flight between the towers of Gotham City. I suspect that a real-life Batman would have to live on Mars to be able to do what he does.

I didn't also see anything about lightsabres in Kakalios’ book...Here’s my suggestion physics-wise:

A replica handle containing a ring of lasers and reflectors and a retractable radio aerial style fencing foil/blade, extremely strong and rigid so it doesn't bend when swiped or on contact with solid objects (similar to an extendable baton). Attached to the end of the foil/blade is a small reflector disk, facing back towards the handle. When activated the foil/blade smoothly extends to its full one metre length in approx 0.25 seconds. Simultaneously a series of lasers are emitted from the handle towards the reflector disk on the end of the foil blade creating loops of lasers and the appearance of a solid, rounded blade. The rapid extension of the light blade as it is switched on would be visible to the naked eye as it would be a mechanical process rather than the instant switching on of a laser. The reflector disk on the tip of the blade would be externally illuminated to the same colour and intensity as the laser blade to disguise it. The inner foil blade would be largely protected from damage in combat by the laser blade surrounding it, which would do the cutting. Authentic sound effects would be motion or contact activated. All you need to do now is sort out how to power it, wireless is not possible but add a long cable hidden up your shirtsleeve/down your trouser leg and a power source and you could probably make one good enough to pop balloons now. I think I need a lie down. Suspend the reflector in a magnetic field, a metre away from the handle - no need to have an extending baton. Switching the device off causes the field to shrink, returning the reflector to its housing at the end of the handle. The laser would cut anything they came into contact with, but the magnetic field would repel the field of another sabre, thus appearing solid when striking the beam of another sabre. That's how it worked in my back-of-an-exercise-book doodles in the late '80s. 
Unfortunately, blasters and lightsabres have a bigger problem than accuracy, and that's the fact that they vary so much in how much damage they do, without ever being adequately defined. Sometimes (when it's a main character) blaster bolts can just give a nasty burn, other times they're insta-kill. Don't even get me onto the purpose of armour in star wars, when shots to armour are from what I've seen, far MORE deadly than shots to unarmoured characters..

quinta-feira, agosto 29, 2019

FaceApp: "Fake Physics: Spoofs, Hoaxes and Fictitious Science" by Andrew May

“For a SF writer, or anyone else, to produce ‘fake physics’ that might even fool a professional physicist, it has to look much more like the real thing.”

In “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Technically Quantum Theory is a branch of physics, but it’s quite unlike any of the others. It doesn’t involve any authoritarian ‘laws of physics’ that you’re not allowed to break. Relativity says you can’t travel faster than light. The Second Law of Thermodynamics says you can’t have perpetual motion. Quantum Theory, on the other hand, says you can do anything you like. [...] It’s based entirely on jargon, which you can use to mean whatever you want it to mean. A few examples are: ‘Nonlocality, ‘Entanglement’, ‘Wave-Particle Duality’, ‘Hidden Variables’ and ‘the Uncertainty Principle’. Feel free to use these terms in any way you want: no-one else understands them any better than you do.”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Is the Yeti the same species as Bigfoot or a different one? Does the effectiveness of telepathy depend on the distance involved? Why does the temperature drop when a ghost is in the room? How many members of the US congress are shape-shifting reptiles? If mainstream science [as opposed to crackpot science] addressed questions like these, people might start taking it seriously. ”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

Do you believe FACEAPP App really ages you? (*)

Do you believe in Climate Change? The problem is what you mean by it. This illustrates very nicely the issue with science, verification of results, the different kinds of bad science, and the relationship of science to public policy. Who are the deniers? What do they deny? And how do you think ruling out fraud from the key papers is going to enable you to purge them? Made up data is the least of our problems with climate science. I know of only one alleged instance, where a series was allegedly illegitimately extended by simply infilling. The problem comes when there is real data, in the form of lots of proxy series, but one picks only those which show what one wants. Is this fraud? Or is it bad judgment? Or is it perhaps a legitimate attention to some very disturbing and important data? One then picks a statistical treatment which is not recognised as being legitimate or optimal. Is this fraudulent? Probably not. One is not a statistician, the study has passed peer review. People may disagree, but so what? People who are sceptical about the merits of dramatic CO2 reduction programs, the Paris Agreement, Wind and Solar, high values for the Climate Sensitivity Parameter are very rarely alleging fraud. What they are arguing is that some scientific propositions have not been shown to be correct, and that the public policies supposed to react to them are not fit for purpose even if they were.

The problem from bad science flowing into public policy seems to me many times greater and more costly than the problems from fraud. And the result is that even if you hit fraud in the way the article suggests, you will still be left with the problem of bad science. The solution is better, more critical, peer review. More and more prompt disclosure of data and methods. And more critical coverage in the generalist press. And less talk of purging the unbelievers!

Climate science makes pretty much all of its raw data available for just this kind of purpose in fact: and model code is typically either open source or source-available (i.e. you have to sign something and what you can use it for is restricted but it does not cost money). *Running* models is harder because their configuration is extremely fiddly and usually dependent on details of the computational environment they live in. Also you need significant computational resources to do anything non-trivial.

I've done some private work on making tools to post-processing tools in fact, and I'm generally interested in this area. There is a lot of work to do to make things more accessible, inevitably, and a lot of the technical problems are not well-understood by people with science backgrounds. The deniers do not like this as more data, more easily processed, is very harmful to their interests. There has been a bad history of datasets becoming unavailable under denialist administrations in the US, and this seems likely to happen (and may already be happening) under Trump. This vanishing of data by denialists is reasonably terrifying (very terrifying in fact) as it makes it hard to dispute their lies, which is the point of course.

What about peer review you may ask? Ah yes, Peer Review, the practice of prestigious journals asking scientists to edit and review research articles for free, then charging their universities for access to that research. The practice were scientists submit papers to journals edited by their colleagues who ask friendly colleagues to review them favourably. Peer Review isn't worth much. Transparent and reproducible analysis that can be scrutinised by anyone, including algorithms, is the way forward.  As in all areas of human activity there will be those who are unethical, but for the most part scientists and editors have high integrity. The major problem in my experience is not fraud but incompetence, compounded with protectionism. The protectionism is a bigger issue which also needs unpacking. Using the wrong tests is actually a much more important source of error than simple mathematical errors and this more important problem will not be picked up. I agree that transparency is critical, but this kind of "vigilantism" will not generate the desired outcomes. We need mandatory Open Data, open data sharing infrastructures, and competent reviewing. Fresh air is an excellent disinfectant and we dont need antiseptics.. at least not yet. Most of the time this works well, but there are multiple cases where "bad science" gets through. There are also folks who announce results that haven't been through peer review. To give a few examples:

Árpád Pusztai who stated that his research showed feeding genetically modified potatoes to rats had negative effects on their stomach lining and immune system. Pusztai's experiment was eventually published as a letter in The Lancet in 1999. Because of the controversial nature of his research the letter was reviewed by six reviewers - three times the usual number. One publicly opposed the letter, another thought it was flawed, but wanted it published "to avoid suspicions of a conspiracy against Pusztai and to give colleagues a chance to see the data for themselves," while the other four raised questions that were addressed by the authors. The letter reported significant differences between the thickness of the gut epithelium of rats fed genetically modified potatoes, compared to those fed the control diet. The Royal Society of Medicine declared that the study ‘is flawed in many aspects of design, execution and analysis’ and that ‘no conclusions should be drawn from it’. For example, too few rats per test group were used to derive meaningful, statistically significant data.[11]

Jacques Benveniste who published a paper in the prestigious scientific journal Nature describing the action of very high dilutions of anti-IgE antibody on the degranulation of human basophils, findings which seemed to support the concept of homeopathy. The controversial paper published in Nature was eventually co-authored by four laboratories worldwide, in Canada, Italy, Israel, and France. After the article was published, a follow-up investigation was set up by a team including physicist and Nature editor John Maddox, illusionist and well-known skeptic James Randi, as well as fraud expert Walter Stewart who had recently raised suspicion on the work of Nobel Laureate David Baltimore. With the cooperation of Benveniste's own team, the group failed to replicate the original results.

Gilles-Éric Séralini who published a paper in Food and Chemical Toxicology in September 2012, the article presented a two-year feeding study in rats, and reported an increase in tumors among rats fed genetically modified corn and the herbicide RoundUp. Scientists and regulatory agencies subsequently concluded that the study's design was flawed and its findings unsubstantiated. A chief criticism was that each part of the study had too few rats to obtain statistically useful data, particularly because the strain of rat used, Sprague Dawley, develops tumors at a high rate over its lifetime.

Fraud exists, poor reviewing exists, incompetence exists. A study of these high profile cases can help understand weaknesses in the process of publication and dissemination, and I think we learn from them. What's been quite heartening is that the increased concern - interestingly kicked off by the Pharma industry when they couldn't reproduce experiments from the literature - is now driving improved policies and adoption of mandatory open data provision. This has lead to increasing awareness in the scientific community and I hope that will improve things more. The need to work towards good practice everywhere is evident but thats no reason to damn the whole of the scientific enterprise as flawed and fraudulent. Have you ever herad of "p-hacking"? If you have ever done any real data analysis you will know that you try many things. You use different variables, different transformations of those variables, different sample selections, different models. The model you publish is obviously not the model that showed anything but the model that shows something interesting. But the procedure above invalidates p-values, they then no longer protect against finding spurious findings 95% of the time, if they are less than 5%. This is why so many results in science do not replicate.

Incidentally though, Asimov's 'Thiotimoline' story wasn't intended as SF. He wrote it while working on his PhD thesis, worried that he wouldn't be able to manage the obligatory, fairly turgid style - so it was a kind of spoof writing exercise. He showed it to an editor (John Campbell, as Andrew May correctly states, though without telling the whole story) who liked it and wanted to publish it. Asimov agreed, but only if a pseudonym was used, as he didn't want the doctoral committee assessing him to think he wasn't taking things seriously. To his horror it was published under his own name. Though all turned out well. Apparently the final question in the viva voce asked him to discuss the properties of his imaginary substance thiotimoline - and Asimov collapsed into laughter, realising they wouldn't have done this if they weren't going to pass him. There is so much stuff called science fiction today that we need a method of rating the science in the fiction. But so many readers do not care, and the viewers are even worse. And most of the media just cares about collecting eyeballs.

How about archetypes for comparison to rate the science?

#1. Cat and Mouse by Ralph Williams

#2. The Servant Problem by Robert F. Young

#3. Omnilingual (Feb 1957) by H. Beam Piper

#4. All Day September by Roger Kuykendall

#1 was nominated for a Hugo but lost to Flowers for Algernon, so it can't be bad, but it says nothing whatsoever about the "science" or "technology" enabling the story. An alien just makes things happen as though by magic;
#2 is unusually similar to #1 in that the technology driving the story performs the same function, but the writer offers a kind of explanation mentioning Möbius loops and throws in a little astronomy;
#3 is a mixture of speculation about future technology combined with considerable discussion of real science regarding physics and chemistry;
#4 is strictly hard SF and contains nothing likely to be impossible at some point in the not-too-distant future. It is in fact curious: it is a Moon-colony story written 10 years before the first Moon landing in 1969, and in it a prospector finds water on the Moon, which was actually found in October 2009. The story also has a little chemistry. It brings to mind Arthur C. Clarke's "A Fall of Moondust";
#5: one of the characters attempts to get near light-speed using inertia-repressing technology and ends up with a ship full of pulverised goo where her crew used to be.

(*) All the imaging and algorithmic tools are based on Convolutional Neural Networks. A ConvNet is just the machinery for interpreting localized features (pixels, superpixels and so on). These features seem similar to textures, but they are not exactly the same. This beautiful paper is about visualizing what is actually learned (memorized) about images with this technology:

I think it is important for you to know how the machinery behind it actually interprets pixels of images.
FaceApp is most likely based on end-to-end trained networks. In other words, they do not have to tell the algorithm "these are nose wrinkles, these are smile lines" and so on for each component. They just throw millions of faces, young and old, at it, and the algorithm learns the underlying patterns by being asked to decide whether an output is a real old person or a fake old person. I do not know FaceApp's actual algorithm, but I'm pretty confident it is based on CycleGAN, DiscoGAN, or something similar. There's a whole zoo of Generative Adversarial Networks nowadays. So, the answer to the question is no. They just store your pictures somewhere else and throw FaceApp's "algorithm" at them to get at your aged self...
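The adversarial idea itself fits in a page. The following is a drastically simplified, hypothetical sketch of my own, nothing like FaceApp's real model: real GANs pit deep networks against each other on images, whereas here the "generator" is a two-parameter affine map on scalars and the "discriminator" is a logistic regression, trained by hand-written gradient steps. The generator starts producing samples around 0 and, purely by trying to fool the discriminator, learns to imitate "real data" drawn around 4.

```python
# Toy adversarial training in pure Python, to illustrate the GAN idea.
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# "Real data": scalars from N(4, 1) stand in for real photos.
def real_sample():
    return random.gauss(4.0, 1.0)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 32
start_mean = b     # E[g(z)] = b, because E[z] = 0

for step in range(800):
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    gw = gc = 0.0
    for _ in range(batch):
        xr = real_sample()
        dr = sigmoid(w * xr + c)
        gw += (1 - dr) * xr     # push D(real) towards 1
        gc += (1 - dr)
        xf = a * random.gauss(0, 1) + b
        df = sigmoid(w * xf + c)
        gw -= df * xf           # push D(fake) towards 0
        gc -= df
    w += lr * gw / batch
    c += lr * gc / batch
    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    ga = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        xf = a * z + b
        df = sigmoid(w * xf + c)
        ga += (1 - df) * w * z  # chain rule through g(z) = a*z + b
        gb += (1 - df) * w
    a += lr * ga / batch
    b += lr * gb / batch

print(f"generator mean: started at {start_mean:.2f}, "
      f"ended near {b:.2f} (target 4.0)")
```

Notice that nobody ever tells the generator "the target mean is 4"; the only training signal is whether the discriminator can still tell its output from the real thing. That is exactly the sense in which an ageing app never needs labels like "wrinkles" or "grey hair".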
For fun I uploaded to the application a photo depicting my hangover after two days of binge drinking, and it gave very weird results: my older face was very similar to Boris Johnson's. Is there anything wrong with me? My bathroom mirror has a similar app: whenever I look at it I see an old bloke with a whitish beard and a lined, jowly face instead of the fit thirty-year-old sex god that I still am mentally.

SF = Speculative Fiction.

Tuesday, August 27, 2019

Dr. Strangelove in Colour: "House of Suns" by Alastair Reynolds

When I was in high school in the 80s, a bunch of hippy drama students came in and told us we were going to travel through time. They took us into a tent which had been set up in the school hall and told us to close our eyes and concentrate for a minute. When the time was up, they took us out of the tent and started expressively wandering around the school hall, exclaiming 'Oooh look - a tyrannosaur!' and 'Watch out for that herd of brontosaurs!'. We stood there, utterly nonplussed. That's "House of Suns" for you.

I'll be 99 in the future (if I don't die first), and I can't really say I have much in the way of false memories. Either I remember the details quite clearly, or I don't remember them at all. There are about 1,200 to 1,500 pictures that my family took from the time I was born till I was 18, or about 3 rolls of film a year. Of the ones taken before I was about 5 years old, I only remember a handful being taken. (In one of them I was trying to grab the lens, and my mom took the picture right then.) On the other hand, I could draw an exact floor plan of my grandparents' house, a place I haven't been in since I was 8 years old and of which there are almost no pictures. But on the flip side, my memories have mostly faded: other than a couple of memories a year, nothing from all my time in school is left. I live a life where nothing really sticks out and every day is pretty much the same as the last. I think memories get rather compressed together and, with some triggers, I could probably pull them out, but it's hard trying to think about them without pictures or tangible things to hold on to. (Like my CD collection: even though most of the CDs came to me from 1990 to 1995, I could probably tell you where I got most of them, the general order in which I got them, and whether I got them new, used, or by mail order.) Wow... like this one CD I bought at Valentim de Carvalho in Lisbon, on sale for about 4 or 6 Escudos (our currency before the Euro came along), back in April or May 1990 or whatnot. But I couldn't tell you about most things I got 5 years ago... or even when I got them. Much of "memory" is superimposed upon events that everyone involved in the "memory" remembers differently. It can be highly unreliable, particularly as a factor in legal situations, where people have been executed largely on the grounds of it. The most dangerous person is the one who maintains they have an infallible memory. I clearly, distinctly, and in detail remember seeing "Dr. Strangelove" in colour when it first came out.
A number of years later I was told it was a black-and-white movie. I did not believe it. Even after researching the movie and having to admit I was wrong, some corner of my mind still believes I saw that movie in colour. The best way to put it is that memory can be vague and is often elastic. It is easily distorted by a preferred version of events, and that is even without the person recalling the events acting inauthentically. If emotional influences could be eliminated, the elastic version would return to its truest form, but even then it would be far from infallible. That is a bit like learning how a car works by removing bits. It reminds me of the school pupil in a nature class. The class was studying a spider and discovered that when they shouted at it, the spider ran away. So they reported this to the teacher. Some little time later the teacher went to the pupil and asked what they had now learned. The pupil replied, "When you remove a spider's legs, it goes deaf".

"House of Suns" brims with ideas, and this is both a strength and a weakness. It left me wanting—needing—more, another volume just so we could continue meditating on so many of these ideas. I loved the not-reliable-memory aspect of it, the AIs' solution to the causality problem, as well as Palatial, which got really spooky with Abigail's poor playmate. My favourite parts were the book’s sections with Abigail Gentian. The exploration of her initial "shattering" and how it relates to our sense of who we are as individuals struck a chord for me. The last chapter is still totally cliched: "’Brutus, we Must get out of this frigging boat’ race against time!’", undermined my enjoyment of this post-human novel. I like Reynolds' stories best when the human tech is at its wondrous peak, so “House of Suns” was a great read for me. The stories "Diamond Dogs" and “Turquoise Days” are my favourite (his two collections “Galactic North” and “Beyond the Aquila Rift” are worth checking out; both of them are above the usual crappy SF fodder we see bandied about nowadays). “House of Suns” along with “Redemption Ark” and “Slow Bullets” are his best novels so far.

Monday, August 26, 2019

Religion-in-SF: "Redemption Ark" by Alastair Reynolds

The threat of the Inhibitors reappears in all its danger in “Redemption Ark”, threatening the total extinction of humanity, just as happened in the remote past to the other intelligent cultures that tried to spread across the galaxy. The cache of weapons aboard “Nostalgia for Infinity”, the Ultras' ship that already appeared in “Revelation Space”, continues to orbit the planet Resurgam and acquires vital importance, to the point of provoking a bloody race to secure its possession in the face of the coming war. Different factions of a humanity divided by war will try to gain control of it, leading to various space clashes in which the author once again shows a prodigious imagination. Meanwhile the Inhibitors, or the wolves, as the Conjoiner faction calls them, quietly undertake, in Resurgam's solar system, their genocidal task of titanic proportions, which will force the consideration of evacuating the planet, with scarce means and little cooperation from the government and the population.

I'm not sure about the universe being indifferent. We live in it and its laws are what we have adapted to. Conditions change, here on Earth as well as around the Universe, but we would still have to adapt to the same laws wherever we were. If we look at science now, the new frontier isn't so much the material universe but the mathematical. The visible could be described as a tidal wave of probabilities painting across a moving canvas (to mix metaphors). At this level we can ask: is that tide indifferent, or is it full of intent? On a human level I would suggest that it's very hard to answer that question, because we can't be inhuman. In other words, even our attempts at having no intent are part of our intent. I'm reminded of the orange. It just happens to be the right size to be eaten and the pips spat around. Was it intent that produced a fruit that feeds others so that the orange itself can propagate? What of the rules that produced this convenient arrangement and the unlikely events needed to bring it about? Do these speak of intent? In my view there are two masters in the Religion-in-SF field today, Gene Wolfe and China Miéville. Miéville agrees about Wolfe; I don't know if Wolfe thinks the same about Miéville. Strikingly, Miéville is an atheist and Wolfe a Catholic. What I like about both of them is their openness to the literally infinite range of possibilities for the human, post-human and alien: the sense that the universe is not just stranger than we know but stranger than we can know. Which is also why I think Tarkovsky's “Solaris” and “Stalker” are full of mystery; in both cases we are faced with something absolutely beyond anything we can even name, let alone understand. That sense of astonishment and bewilderment can bring with it an understanding that our daily mundane existence is also astonishing and bewildering and full of infinite possibilities. In “Stalker” and “Solaris” there are no special effects at all.
But the earth and the sea are transformed into alien places simply by being closely and minutely observed. And “Stalker” ends with a tiny, unnoticed, trivial miracle, an almost imperceptible intrusion into our world of... God? The Beyond? Real Reality? The Alien? “Stalker” seems to me to capture the sense of absolute otherness which is required for a real concept of the Divine, and it does this without any special effects or CGI. The film reminded me of the lines from Rilke's Duino Elegies: “Denn das Schöne ist nichts als des Schrecklichen Anfang, den wir noch grade ertragen, und wir bewundern es so, weil es gelassen verschmäht, uns zu zerstören.” (For beauty is nothing but the beginning of terror, which we are still just able to endure, and we are awed because it serenely disdains to destroy us.) The Stalker himself is a man like Jeremiah, a man broken by his encounter with something real but beyond words and names. The film shows us a post-industrial landscape literally transfigured by the fact of observation. I loved the fact that the aliens, if that is what they were, came and left and changed everything and said nothing. The final scene, the tiny unnoticed "miracle" performed by the Stalker's child, is a moment of pure perfection. Reynolds' SF stands somewhere between Wolfe and Miéville: he tries to create a universe full of mystery, and usually leaves it up to the reader to imagine the reality behind the veil. Religion in SF is a fun topic, and much misunderstood. Whether or not you see religion and science as at odds, SF is a fertile toolkit for exploring religious and religious-studies themes.

Probably the best Revelation Space novel of the bunch. Last but not least, we're faced with the unfathomable "doorstopper effect" that distorts space-time and causes novels of 300 or 400 pages to end up as huge tomes that barely fit on our shelves. In this case the "doorstopper effect" was moderately strong and the novel ended up at almost 800 pages (!). A shame.