Wednesday, September 04, 2019

Tortoiseshell Ray-Bans: "Adventures of a Computational Explorer" by Stephen Wolfram

“‘You work hard...but what do you do for fun?’ people will ask me. Well, the fact is that I’ve tried to set up my life so that the things I work on are things I find fun. [...] Sometimes I work on things that just come up, and that for one reason or another I find interesting and fun. [...] It [the paradigm for thinking] all centers around the idea of computation, and the generality of abstraction to which it leads. Whether I’m thinking about science, or technology, or philosophy, or art, the computational paradigm provides both an overall framework and specific facts that inform my thinking. [...] I often urge people to ‘keep their thinking apparatus engaged’ even when they’re faced with issues that don’t specifically seem to be in their domains of expertise.”

In “Adventures of a Computational Explorer” by Stephen Wolfram

“The real payoff comes not from doing well in the class, but from internalizing that way of thinking or that knowledge so it becomes part of you.”

In “Adventures of a Computational Explorer” by Stephen Wolfram

What do “Arrival”, Gödel’s Incompleteness Theorems, TCE (Theory of Computational Equivalence), the Theory of Computational Irreducibility (TCI), AI, Coding, ..., Physics (e.g., Quantum Mechanics), and Computer Science have in common? Stephen Wolfram.

Back in the day, when I was attending university, there was a turf war between those who used Mathematica and those who used Matlab. I didn’t side with the crowd that thought Matlab was better than Mathematica. And I’m talking about Mathematica 2.0 (the version that ran on MS-DOS). Why did I choose Mathematica and not Matlab, you may ask. At the time people were often surprised by the grades I was getting in maths subjects: Calculus, Linear Algebra, and others of the sort you can use Mathematica for. Even after I finished college, I remained a die-hard Mathematica aficionado. I couldn't even consider a package without symbolic capabilities, and Mathematica had plenty of those to go around! You could just do so much more symbolically than you could if you were constrained to numerical routines, and numerical routines were what Matlab was for, in my view at the time. In retrospect I’d say Mathematica and Matlab are designed to do different things, i.e., what you pick should depend on your intentions. You can force one to do things the other is better suited for, and vice versa, but the process and the result will suffer more than necessary. It’s up to you, really. Calculus I, II, III, and IV, Linear Algebra I and II, Thermodynamics, Mechanics I and II, Nuclear Physics, Numerical Analysis I and II, etc. Those were some of my course subjects in college. I could handle lists and matrices easily, plus all the best mathematical functions were there. Nowadays the Mathematica 2.0 that I knew and loved has come a long way: extremely sophisticated graphics and visualizations that allow me, for instance, to make and visualize an animated gradient descent, animate different weights for a given neural network, choose a specific ML algorithm and automatically classify a data-set into classes, plot stunning 3D visualizations, and make animations and manipulate variables on the run while I watch the resulting outputs.
The version I just checked out even comes with all the libraries integrated! It's a great piece of software and a great symbolic language; if you want to be serious about ML and you know the formulae for the algorithms, you can build them from scratch in a completely customized way, i.e., you're the master of your own destiny! You can also do face recognition, geolocation of objects with 3D plots of map surfaces, handle cellular automata like anything else, and develop completely customized social-network models with AI. You can even develop all kinds of DIY projects. As Wolfram states: “As of now we are up to Mathematica 12, with nearly 6000 functions and counting.” I just wish I was still in college... As soon as I have some spare time, I'll post something about one of my projects using Mathematica.
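Since cellular automata come up here: in the Wolfram Language this is essentially a one-liner (something like `CellularAutomaton[30, {{1}, 0}, 15]`, if memory serves), but the idea fits in a few lines of Python too. A toy sketch of my own, not Wolfram's code: Rule 30, the automaton Wolfram made famous, grown from a single black cell on a wrap-around row:

```python
def rule_step(row, rule=30):
    """One step of an elementary cellular automaton with periodic boundaries.

    The left/centre/right neighbourhood of each cell forms a 3-bit number
    (0-7), which indexes into the bits of the rule number itself.
    """
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 31, 15
row = [0] * width
row[width // 2] = 1  # single black cell in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule_step(row)
```

The bit-indexing trick is the whole encoding: the rule number's binary expansion *is* the lookup table, which is why there are exactly 256 elementary rules.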

I'm not sure how many will read a book like Wolfram’s. The anti-“expert” bias coming from some sectors of society cannot be overlooked: unless you accept that you need to learn things from people who understand them better than you do, there is no point in even trying to learn new concepts. It seems to me that the more innately ignorant a person actually is, the more firmly they hold to their own ignorance. Until you want to learn new things, you never will, and most people prefer not to be challenged in what they want to be true. If you tell them they are simply mistaken, they are annoyed; if you go on to explain why they are wrong, they just get angry and defensive. They defend even outright lies with ferocious, single-minded determination. They seem to believe that being entitled to their own opinion implies that they are entitled to their own facts: they are not... Well, as the saying goes, most people are fuckwits. Wolfram can talk and write about anything through the lens of knowledge, behind the lenses of his prescription tortoiseshell Ray-Bans, and make it come alive, with some clever jump-cutting in post-production. What more can I ask for? At least Wolfram does not pretend to be a prophet à la Kurzweil; that would be unbearable. I much prefer Wolfram the way he is. In a universe where so much of our pop-science writing is utter facile pablum, this book is a breath of fresh air. Whatever happened to substance and respect for the mind, from whatever corner of the globe? I refuse to be defined by my clothes, hair or the pigment of my skin. It might be out-of-date now, but that was what we were taught as kids in the 1980s: to question absolutely everything with the slightest semblance of authority or group-mindedness; uniforms, appearance, the outer accoutrements of mere identity were never to be what defined us. Did they teach us wrong? I do, however, recall there being a kind of "intellectual" remit that seemed to slowly evaporate (a loss of nerve in wider populist competition).
Was it really radical? Only to the extent academia and the arts will always challenge convention. So, radical enough? Not for some. But substantial, certainly. There might be another explanation for the loss of "intellectual" edge, and that is simply that a certain kind of edge has gone out of fashion. Yes, Beckett was a great (the most profound) comedian. But you can't keep depressing that pedal. Letting go of the intellect might in a sense be almost a Daoist wisdom. Life is so short, after all. My own view on this is actually ambivalent and subject to contrasting voices. Nobody knows more than a tiny subset of human knowledge any more; possibly the last who ever did - or ever will - were Newton and Leibniz. This means that every single one of us has almost limitless vistas of ignorance inside them; wisdom begins only when we realise that fact. Only rather a lot of people think "expert" is a deep insult; it is simply not. Today's pop-science audience gravitates towards the Great Bake Off, The Apprentice, Love Island. Why? Not because people are stupid, but because most alternative ideas, on a popular level, don't make sense in a world that fundamentally doesn't look like it's for turning, or indeed that it can be turned. Hence the audience for pop-science books that pass the time rather than ones that 'overthink' things: thinking pop-science books arise when there is an appetite for change generally. The question then is: "do we want to change?" The same happens with the Tube.

I was one of those who delighted in watching Open University programmes early in the morning, even if I didn't know much about calculus or differential equations at the time. I preferred the sociological and historical units, but the science/maths lectures and demonstrations could be fascinating - and, of course, worth it for the hang-glider collars and fractal patterns on those shirts. (I may have learned more about aerodynamics and the maths of chaos from the clothes than from the lectures!) The golden age of the radio sage was before my time (1930s-1950s); such programmes still represent the dying embers of what once was in that medium, and while philippics and jeremiads are easy and tempting to produce here ('Woe, woe and thrice woe, we are undone', etc.), there is something diminished about modern intellectualism. Pop-science has certainly dumbed itself down. Where's all the genuine investigative science journalism gone? Too many professional science intellectuals are writing appalling stuff nowadays, and they're all shit anyway. The mediocre-minded might rail about consistency or moral or intellectual clarity, but what they write is proper comedy. Deft and ludic. Spectrum scarcity is over. Internet delivery means the old gatekeepers (e.g., pop-science publishers) have been largely disempowered and can no longer get away with giving a mass audience stuff they don't want...

Loved Wolfram's take on QC: how a feasible QC would operate, and if anyone could make it work, it would have to be Prof. Hayden et al. My best guess is a format that resembles the natural quantum-field mechanism in the first few fluctuations of probability-dominance dimensions, using a synchronized/modulated matrix of frequencies that identify each dimension and can be read from scattered interference. The photonics design that has been suggested seems likely. Is this why Bose-Einstein condensates are "hot"?

The naturally occurring QC is an "everything everywhere all at once" connection and the readout is cause-effect, so the answer to the question put in is the question of the answer. Programming is not a repeat-cloned possibility, so all possible paths of inquiry have to be explored simultaneously, which is probably the explanation of the question-answer of "what is life?": life is the question of self-sustaining continuity, and not only are novelty and difference natural and normal, life ceases by accumulated errors if change is not pursued. At this level it is the instinctive requirement of curiosity. By these deductions, this is essential research, but any device is most likely an enhancement of the human mind - the centaur model?

There is also no reason to allow yourself to be irritated by Wolfram's mannerisms, if I may call them that. This guy is cracking the codes of life, physics and the universe. I've never read such an idea-dense book of this length before. Let the profundity of the content drown out the minor distractions. Scientists like Stephen Wolfram have never claimed to know all the answers. If they did, they'd be out of a job. Although what's equally funny is that those who use the phrase "science doesn't have all the answers" can rarely point to a question where somebody in some other discipline actually does. That’s what this compilation of his blog essays is all about: questions.

Also loved the chapters on Wolfram’s personal infrastructure (some nice juicy hacks) and “Scientific Bug Hunting in the Cloud”, which highly resonated with me, because I also did some of that stuff for a living when I was a SAP SysAdmin back in the day. Wolfram is a hacker at heart who also became a CEO...

NB: Too bad Wolfram didn't apply his cellular automata to QM (he’s got a sub-chapter, “Reversibility, Irreversibility and More”, in this book, but the content is very light; maybe he made a full approach in “A New Kind of Science”, but I haven’t read it). I'd have liked to read his take on it. Is there a difference between the two? My take is that there's one simple difference: automata theory is deterministic and global while quantum theory is nondeterministic and local, which makes quantum theory a specific case of automata theory the way special relativity is a special case of general relativity. And one simple similarity between quantum theory and automata theory is that they are, respectively, topologically invariant and variant through proper-time configurations, just as special relativity and general relativity are morphologically invariant and variant through the space-time manifold. The universe is governed by a SINGLE probability wave function, and the quantum field of the virtual particles (empty space) is the basis of all physical reality; it defines entanglement and entropy and can self-simulate an intelligent, conscious 'observer' that collapses the field into real particles (matter). The real matter is fine-tuned to self-organize and self-simulate a quantum computing function of thousands of qubits, self-error-correcting all the systems and all the processes, from how a planet revolves around its star to how photosynthesis produces food for all plants. The QC is triggered by the single probability wave defining the infinite-dimensional QC function. What we need is an algorithm that enables the QC to perform all the processes in the universe, eliminating randomness and chance, producing life out of non-living matter. Life is also a QC (all five senses, our brain and all our cells act as QCs), repairing/regenerating 50-70 billion damaged cells daily at 99.99% efficiency and at lightning speed, or producing protein as required.

Monday, September 02, 2019

Non-Over-Wrought Fiction: "A Dangerous Man" by Robert Crais

Crime can be just as much 'literary' fiction as anything else. Granted, much of it is little different from watching the telly, but people get high-falutin about TV series these days, and I'd simply rather read than watch, mainly. And it's rarely up its own arse or boring... and even stuff that’s not brilliant can be enjoyable, like "A Dangerous Man", which is more than can be said for much of the 'over-wrought' stuff that gets so lauded as literature... And at least if you read rather than watch, you don’t have to worry about the ending being changed - yes, I’m talking about you, "Ordeal by Innocence". Oh, the conceit of an adaptor who thinks they know better than the author. Dismissed as 'genre', crime fiction more than a generation ago surpassed literary fiction as a threat to the forests. Today it is so amorphous that there really is little if anything - not even crime - to link the 'hardboiled' novel with the fat-old-lady-with-cats 'cosies'. It is amusing to read the one-star Amazon reviews of recent crime fiction, where books are damned for including 'violence' and 'language'. Crime fiction has had a huge influence on literary fiction, from Camus to Murakami. As a form it’s incredibly flexible and the perfect vehicle to investigate the workings of society, be it race, class or sexuality. Ross Macdonald, Ellroy, Chandler, Hammett, James M. Cain, Highsmith, Dorothy Hughes, Chester Himes, Walter Mosley - the list is impressive and endless. And that is leaving out Christie and her acolytes, and Scandinavian Noir. It’s no surprise that the genre has powered past so much pretentious and irrelevant literary work. Read "The Postman Always Rings Twice" and then read "The Outsider". This is not something I'm making up; Camus said it himself: he was influenced strongly by Cain. Sartre was also strongly influenced by crime fiction in the writing of "Nausea".
The book "Looking for The Stranger" by Alice Kaplan, a well-regarded authority on Camus, also mentions the influence of "The Postman Always Rings Twice". Congrats to those who chose "A Dangerous Man" as a crime novel; a lead pipe in the library and a raspberry to those who chose a work of literary fiction which could in no way be defined as a crime novel. After having read more serious stuff lately (6 books on Quantum Physics and the like), I felt the need to lighten up.

Should I go down the mundane path or the Crime Fiction path, I asked myself. I gave up on "literary fiction" chiefly because most "literary novelists" write tedious drivel that gets extravagantly overpraised in the press, being reviewed by their back-scratching mates inside the tiny, cosy literary scene. After some deep thought, I decided on the latest Crais: “A Dangerous Man”. What’s better than tackling a Crime Fiction novel by none other than one of the so-called Masters of the form? What have I got to report after having finished it? Not much, but I liked the mystery. A Crime Fiction novel with a good mystery at its centre can still work even if the writing is bad, as long as the mystery works. Obviously good writing is better, but in this kind of book it’s optional. On the other hand, if the mystery is nonsense then no amount of finely tuned phrases is going to cover it up. In the case of “A Dangerous Man”, the murder mystery is quite passable and there isn’t much gruesomeness.

Saturday, August 31, 2019

Quantum Squirunnies: “How to Teach Quantum Physics to Your Dog” by Chad Orzel

“Uncertainty is not a statement about the limits of measurement, it’s a statement about the limits of reality. Asking for the precise position and momentum of a particle doesn’t even make sense, because those quantities do not exist. This fundamental uncertainty is a consequence of the dual nature of quantum particles.”

In “How to Teach Quantum Physics to Your Dog” by Chad Orzel


1 – Wavefunctions: Every object in the universe is described by a quantum wavefunction;
2 – Allowed states: A quantum object can only be observed in one of a limited number of allowed states;
3 – Probability: The wavefunction of an object determines the probability of being found in each of the allowed states;
4 – Measurement: Measuring the state of an object absolutely determines the state of that object.

In “How to Teach Quantum Physics to Your Dog” by Chad Orzel
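Orzel's four rules are easy to mimic in code. A minimal sketch of my own (the state names and amplitudes here are an invented toy example, not Orzel's): a wavefunction over a discrete set of allowed states, probabilities from squared amplitudes, and a measurement that returns exactly one state and collapses onto it.

```python
import random

# Rules 1 & 2: a wavefunction assigns a complex amplitude to each allowed state.
wavefunction = {"treat": 0.8 + 0.0j, "no-treat": 0.0 + 0.6j}

# Rule 3: the probability of each allowed state is |amplitude|^2.
probs = {s: abs(a) ** 2 for s, a in wavefunction.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-9  # properly normalized

def measure(wf):
    """Rule 4: measurement yields exactly one allowed state, and afterwards
    the wavefunction sits entirely in that state (collapse)."""
    states, amps = zip(*wf.items())
    weights = [abs(a) ** 2 for a in amps]
    outcome = random.choices(states, weights=weights)[0]
    return outcome, {s: (1.0 if s == outcome else 0.0) for s in states}

outcome, collapsed = measure(wavefunction)
print(outcome, collapsed)
```

The dog-friendly reading: before you look, "treat" and "no-treat" are both there with weights 0.64 and 0.36; once you look, one of them wins and the other is gone.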

Fairly basic take on Quantum Mechanics. It had to be to make it intelligible to a make-believe-Orzel-disguised-as-a-dog…

It doesn't fully address the core ambiguity in the 4 postulates Orzel uses (see the quote above, namely the 4th postulate): what exactly is measurement? That is, the postulates simply assume measurement to be a primitive notion of the theory, not reducible to anything more fundamental. Orzel can’t answer the troublesome question of why measurement outcomes are unique; rather, the book makes that uniqueness axiomatic, turning it into part of the very definition of a measurement. And since it does not address what the measurement process is actually doing, it also does not address the issue "Did you even succeed in measuring the thing you thought you were measuring?" It is easy to argue that "quantum correlations" and entanglement have nothing to do with either spooky action at a distance or hidden variables; they are caused by the purely classical phenomenon of inter-symbol interference together with noise, associated with Shannon's definition of a single bit of information. You cannot measure two independent parameters from an entity manifesting only one such bit. This, on that view, is the ultimate origin of Heisenberg's Uncertainty Principle: any attempt to even try to perform a second measurement is guaranteed to be corrupted by the intrinsic properties (noise and inter-symbol interference) inherent to one of Shannon's "bits".
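Whatever one makes of the Shannon-bit argument, the textbook version of the uncertainty trade-off is a Fourier fact: squeeze a wave packet in position and its transform spreads in momentum. A quick numerical check of the standard result (my own sketch, not an illustration of the inter-symbol-interference claim): for Gaussian packets, the product of the spreads sits at the minimum value of 1/2 (in units where ħ = 1).

```python
import numpy as np

def spreads(sigma, n=4096, span=40.0):
    """Position and wavenumber spreads of a Gaussian packet of width sigma."""
    x = np.linspace(-span / 2, span / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
    # Position spread from |psi(x)|^2 (mean is zero by symmetry)
    dx_spread = np.sqrt(np.sum(np.abs(psi)**2 * x**2) * dx)
    # Momentum-space amplitude via FFT (angular wavenumber k)
    k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    dk = k[1] - k[0]
    pk = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    pk /= np.sum(pk) * dk
    dk_spread = np.sqrt(np.sum(pk * k**2) * dk)
    return dx_spread, dk_spread

for sigma in (0.5, 1.0, 2.0):
    dxs, dks = spreads(sigma)
    print(f"sigma={sigma}: dx*dk = {dxs * dks:.3f}")  # ~0.5 for every width
```

Narrower `sigma` gives a smaller `dx_spread` and a proportionally larger `dk_spread`; the product never drops below 1/2, which is exactly Orzel's adding-waves picture made quantitative.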

The question of "what exactly is measurement?" is squarely addressed and answered in the Transactional Interpretation, which yields a physical (as opposed to decision-theoretic) derivation of the Born Rule (Orzel only mentions it en passant, preferring to dedicate a whole chapter to the MWI). For specifics, including calculations, see the literature on the Transactional Interpretation, which provides an explicit derivation of the Born Rule for radiative processes. I’m not sure the dog would be able to understand it, though… I loved the way Orzel explains the Uncertainty Principle by adding wave-functions, and uses this approach to also explain Schrödinger’s Cat.

On the other hand, there is no mystery to the Born rule. The entire process of computing a wave-function and then computing the sum of the squares of its real and imaginary parts amounts to nothing more than computing the power spectrum of a Fourier transform. The power spectrum (as the name implies) simply measures the energy accumulated/detected within each "channel" of a filter bank. When the energy happens to arrive in discrete, equal quanta, the ratio (total energy)/(energy per quantum) yields the number of quanta accumulated within each channel. In other words, the entire mathematical procedure amounts to nothing more than the description of a histogram, which is why it yields probability estimates. Every photon, gauge boson or quantum object that appears to travel at c relative to us is at the same time an "observer" of that part of the universe that involves its emission-flight-detection path (or better, "process"). It experiences that space-time "chunk" of the universe as a single point of existence, with no distances and no intervals involved. That's why "paths" make no sense for such objects, and why wave/particle duality is not resolved until detection: time intervals (and space distances, for that matter) make no sense for quantum objects travelling at c. Their single bit of existence (from their point of view) is a probability function for us, until it collapses when we detect them. But for them, that collapse happens at the same "time" they are emitted, because there's no time involved in their experience of the universe; the whole "emission-flight-detection" process is experienced at once from their POV. So the universe would "exist" the same way without conscious beings, only it would not be "perceived" the way you are used to (the space-time ratios conscious beings create in our brains).
My point is specifically that the multiplicity of interpretations of quantum mechanics means the philosophical question of whether the world is deterministic or not is still unsolved. Neglecting pilot-wave theory because of its impracticalities is reasonable; concluding that the world is fundamentally random is not. All reasonable alternatives would need to be discarded, not just a few fringe models.
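For what it's worth, the "power spectrum as histogram" reading above is easy to play with numerically. A toy sketch of my own (an illustration of that reading, not standard QM pedagogy): normalize the power spectrum of a Fourier transform and it behaves exactly like a probability histogram that repeated detections reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary complex signal entering a "filter bank":
# the FFT separates it into n frequency channels.
n = 8
signal = rng.normal(size=n) + 1j * rng.normal(size=n)

spectrum = np.fft.fft(signal) / np.sqrt(n)  # unitary FFT (energy-preserving)
power = np.abs(spectrum) ** 2               # energy detected in each channel

# Normalizing the power spectrum turns it into a probability distribution.
probs = power / power.sum()

# Detecting many identical quanta then reproduces that histogram:
counts = np.bincount(rng.choice(n, size=200_000, p=probs), minlength=n)
print(np.round(probs, 3))
print(np.round(counts / counts.sum(), 3))
```

Whether this deflates the Born rule or merely restates it is, of course, exactly the interpretational argument the paragraph above is having.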

Of course, the Uncertainty Principle is more fundamental than the Born rule. The former arises from the logical contradiction of trying to locate a particle with non-zero size at a point in space. A particle is not located at any one point in space but in a region of space that mathematically contains an infinite number of points. There is a contradiction between the math we're using and our physical intuition. We remedy this logical contradiction by imagining that the location of a point particle in a region of space is governed by probabilities. The Born rule uses the square of the amplitude, and it works because it represents the area perpendicular to the velocity vector of an imaginary point particle.

One last piece of advice: next time lose the dog...

Friday, August 30, 2019

Quantum Tunneling: "The Physics of Superheroes" by James Kakalios

“One aspect of quantum mechanics that is difficult for budding young scientists to accept is that the equation proposed by Schrödinger predicts that under certain conditions matter can pass through what should be an impenetrable barrier. In this way quantum mechanics informs us that electrons are a lot like Kitty Pryde of the X-Men, who possesses the mutant ability to walk through solid walls (as shown in fig. 32), or the Flash, who is able to “vibrate” through barriers (illustrated in fig. 33). This very strange prediction is no less weird for being true. Schrödinger’s equation enables one to calculate the probability of the electron moving from one region of space to another even if common sense tells you that the electron should never be able to make this transition. Imagine that you are on an open-air handball court with a chain-link fence on three sides of the court and a concrete wall along the fourth side. On the other side of the concrete wall is another identical open-air court, also surrounded by a fence on three sides and sharing the concrete wall with the first court. You are free to wander anywhere you’d like within the first court, but lacking superpowers you cannot leap over the concrete wall to go to the second court. If one solves the Schrödinger equation for this situation, one finds something rather surprising: The calculation finds that you have a very high probability of being in the first open-air court (no surprise there) and a small but nonzero probability of winding up on the other side of the wall in the second open-air court (Huh?). Ordinarily the probability of passing through a barrier is very small, but only situations for which the probability is exactly zero can be called impossible. Everything else is just unlikely. This is an intrinsically quantum mechanical phenomenon, in that classically there is no possible way to ever find yourself in the second court. 
This quantum process is called “tunneling,” which is a misnomer, as you do not create a tunnel as you go through the wall. There is no hole left behind, nor have you gone under the wall or over it. If you were to now run at the wall in the other direction it would be as formidable a barrier as when you were in the first open-air court, and you would now have the same very small probability of returning to the first court. But “tunneling” is the term that physicists use to describe this phenomenon. The faster you run at the wall, the larger the probability you will wind up on the other side, though you are not moving so quickly that you leap over the wall. This is no doubt how the Flash, both the Golden and Silver Age versions, is able to use his great speed to pass through solid objects, as shown in fig. 33. He is able to increase his kinetic energy to the point where the probability, from the Schrödinger equation, of passing through the wall becomes nearly certain.”

In “The Physics of Superheroes” by James Kakalios
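Kakalios keeps the numbers out of it, but the exponential flavour of tunneling is easy to make concrete. A back-of-the-envelope sketch of my own (not from the book), using the standard WKB-style estimate T ≈ exp(−2κL) with κ = √(2m(V₀ − E))/ħ for a rectangular barrier:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def tunneling_probability(mass, energy, barrier_height, width):
    """WKB-style estimate T ~ exp(-2*kappa*L) for E < V0 (rectangular barrier).

    All arguments in SI units (kg, J, J, m).
    """
    if energy >= barrier_height:
        return 1.0  # classically allowed: just go over the wall
    kappa = math.sqrt(2 * mass * (barrier_height - energy)) / HBAR
    return math.exp(-2 * kappa * width)

eV = 1.602176634e-19     # joules per electronvolt
m_e = 9.1093837015e-31   # electron mass, kg

# An electron against a 1 eV, 1 nm barrier: small but very much nonzero,
# and growing fast with energy ("the faster you run at the wall...").
for E in (0.25, 0.5, 0.75):  # energies in eV
    T = tunneling_probability(m_e, E * eV, 1 * eV, 1e-9)
    print(f"E = {E} eV -> T = {T:.3e}")

# A 70 kg person running at a 20 cm wall: kappa is so enormous that
# exp(-2*kappa*L) underflows to exactly 0.0 in floating point --
# "just unlikely", but unlikelier than any number a float can hold.
print(tunneling_probability(70.0, 500.0, 1000.0, 0.2))
```

The electron/handball-court contrast in the quote is really just this mass dependence: κ scales with √m, and the probability sits in the exponent.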

One problem with travelling faster than light is that there is no requirement that this only happens in a time-like dimension. What if Superman is passing through every point of a particular inertial frame within the lower bound for the smallest possible electro-chemical event? In that case, Superman can be decisive over every action within a particular light cone as it propagates. (Except that we don't know what impact the interface between the quantum world and the classical world has, and at what point the laws switch over. Schrödinger's cat magnifies the quantum up to the large classical scale, but at the chemical-electrical level Superman could become subject to quantum fluctuations, which would not be good for his health, never mind for the control of subsequent ramifications of events in the light cone.) I believe the reliance is perfectly justified: both Marvel and DC have embraced a form of the Many Worlds hypothesis of Hugh Everett. DC's "Crisis on Infinite Earths" story-line is based precisely on this notion. I guess I am relying on a 'many worlds' interpretation and always choosing the one where Supes prevails. From the way I understand quantum tunneling, and I do know a thing or two about quantum physics, the matter tunneling through would simply appear on the other side of said wall or bad guy, as opposed to moving at a constant speed through the space between the two. That, along with the fact that quantum physics typically only applies to members of the quantum realm (RIP, Mrs. Pym).

I can, however, pose a different theory. The amount of space both between atoms in molecular structures and within atoms themselves is massive. There is more empty space in an atom than actual atom parts. The reason that atoms can't move between these spaces, and why we can't walk through walls (I know Heinlein wrote a novel called “The Cat Who Walks Through Walls”…), is the negative charge on electrons; and yes, I do know Kakalios touched upon this. The negative charges, while very weak, are like two magnets aimed negative side to negative side: it's hard to push the two together. However... if you disabled the negative charge, there would be no problem at all moving through a solid object. But wait! The negative charges are actually what keep atoms together... and they bond molecules... so... if you disable the charges then the entire structure would basically atomize into nothingness... not fun unless you're a villain from James Bond. So here is the possibility. What if, instead of fully cutting off the charge, they just have the ability to reduce it? Lower the electron charge within themselves to a perfect balance... right between weak enough to phase through and strong enough to still hold the molecules together? Finding this balance would allow the physics of phase shifting to work perfectly. Actually, I think the Pauli Exclusion Principle plays a much larger role in preventing atoms from passing through each other. Even without the charge of the electron getting in the way, an atom trying to pass through the electron cloud of another atom would run into the issue of degeneracy pressure: electrons, protons and neutrons are fermions and cannot occupy the same quantum state as another fermion at the same time. 
Electrons are not merely particles; they exist in a superposition of their possible states in any particular orbital around the nucleus, so if the electron cloud of one atom passed through that of another, it would require those electrons to occupy the same state, because their superpositions would overlap. Quantum tunneling, on the other hand, would still make some sense. As I said above, it occurs instantly, but that doesn't mean the superhero would have to phase through the wall instantly; it would make much more sense for their individual particles to tunnel simultaneously and rapidly through each atom in the material they were passing through, giving it the appearance of continuous motion - assuming they had the ability to control the quantum mechanical behaviour of all of their atoms.

Ok. “Tunneling” is one thing. What about the rest of the Superheroes stuff? Nerd train incoming...

The problem with Superman goes a lot deeper than the points Kakalios touched on. It seems that the creators, back in 1933, had a different concept for the character. As I understand it, Superman could not fly when he debuted in Action Comics #1. He could only leap over things that were beyond the range of a normal man. He had great speed and great strength. I'm not sure whether he had X-ray and heat vision, or whether he blew with the force of a hurricane and froze objects with his breath. Clearly he was not the god he has become. When America got into World War II, the character changed from a man who could do superhuman things to a man who is virtually indestructible and has the power to move the planet Earth. The country needed a hero that the Nazis could not stop. In fiction, Superman became that hero because, as Kakalios points out, the limits of what he could do had not been defined.

By the time the war was over, Superman and his god-like powers were here to stay, and with each new story it was revealed he could do whatever the plot called for to save the day. A cap on his powers was never imposed because the writers of Superman stories for comic books, television and the silver screen knew they could bring the world to the brink of destruction and, no matter how great the challenge, they could tweak Superman's powers, thus allowing him to save the day. However, I agree that a character that cannot be harmed or killed poses a problem for storytelling. As a writer, I created my own superheroes and wrote stories. What I found was that you have to cheat for the villains, to give them the upper hand. Usually this is accomplished in one of three ways:

[A] The villain gets a hold of a weapon that has the power to kill the hero;

[B] The villain creates a being with equal or slightly stronger powers;

[C] You create a weakness in the hero's armor and allow the villain to exploit it.

This is why Kryptonite was written into the story. The only substance that can harm Superman comes from his own planet. When Krypton exploded, rocks from the debris were thrown out into space. Some of them entered the Earth's atmosphere, becoming meteorites. Harmless to mortals, these glowing green meteors are highly toxic to Superman. First of all, it makes no sense that something from the character's home world would have a negative effect on him. It has been established that the closer Superman gets to the red Krypton sun, the more his powers fade. So, in theory, Kryptonite would have to be remains of the red sun in order to have a negative effect. Rocks from the planet would simply absorb our sun's rays and give off energy that Superman could use. The fact that Kryptonite is green reveals that its inventors did not intend for it to come from the red Krypton sun.

The destruction of Krypton in “Superman: The Movie” (1978) was the result of the sun becoming a red giant and dying. In “Man of Steel” (2013), careless mining of Krypton's core leads to the planet exploding. Either way, the established conclusion is that Kryptonite comes from the planet and not the sun. That's one reason why it is not a good weakness for Superman.

The problem I have with Superman is that the whole franchise is taken too seriously when in essence he's an overpowered dude who will always have enough power to defeat his opponent. The guy can literally fly into the sun for a recharge. Aside from his super strength he has super speed, super-hard impenetrable skin, super senses, and X-ray vision. He's practically impossible to defeat, whether by direct assault or by subterfuge. And since Shad mentioned that at some point Superman gained the power of telepathy, it's impossible to plot anything against him even in secret. As for the Kryptonite: no one can even get close to Superman with that thing since, as I've mentioned above, he has all the super senses. He can see, hear and, I'd argue, feel where the Kryptonite is approaching from, and he can simply use a pole or his super breath to swat it away. And yet the authors try to convince us that he is constantly in some sort of danger; he is not. That's why I prefer Saitama, the One Punch Man. His author is aware that the hero is overpowered, which is why the whole manga is light-hearted: the hero is never in any sort of danger, he destroys all his opponents with a single punch, and he feels no pride in his victories over foes that could destroy the whole planet. Instead he's bored. This type of story is a lot easier to understand, and it does not require the fans to do the lazy authors' thinking for them and come up with deus ex machinas to cover all the plot holes.

In a nutshell, the problem with Superman is that his powers are too great.

In the first Superman movie he turns back the clock. It may look like what he is doing is 'reversing the polarity of the Earth's spin' (he increases his relativistic mass some 13.7 million times by travelling close to the speed of light), but that is just half of the story. The other half is that the whole arrow of time reverses because he is flying faster than light. Since Superman has super intelligence, what he ought to do in order to minimize crime is go back to whatever day his decision to fight crime became operational and micro-manage things until the best possible state of society, and situation for each individual, is achieved. So no more biffing super-villains, just lots of little nudges, e.g. making sure baby Lex Luthor got a nice teddy from Santa instead of a ray gun.
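Taking that 13.7-million figure at face value (it comes from the commentary above, not from anything I have checked against the movie), a quick back-of-the-envelope sketch shows just how close to c such a Lorentz factor would require Superman to fly:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2), the relativistic mass/time-dilation factor."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def speed_for_gamma(gamma):
    """Invert the formula: v = c * sqrt(1 - 1 / gamma^2)."""
    return C * math.sqrt(1.0 - 1.0 / gamma ** 2)

# Even half the speed of light only buys a modest gamma of ~1.155...
g_half = lorentz_factor(0.5 * C)

# ...while a gamma of 13.7 million leaves you short of light speed by
# well under a micrometre per second.
v = speed_for_gamma(13.7e6)
deficit = C - v  # m/s short of c
```

Whether a Lorentz factor that large would reverse the Earth's spin, let alone the arrow of time, is of course pure movie physics.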

I just wish Kakalios hadn't limited the book to high-school algebra. Some topics required higher math to be properly addressed.

NB: Don't forget, Deadpool killed the Marvel universe. Since he is totally aware that he is a fictional character, he could easily read the DC comics and movies too and kill all of them as well. Did they ever discover that, far from gaining superpowers from their doses of radiation, the Hulk and Spiderman would actually be bald, sterile and not long for this world? Batman has been shown escaping Darkseid's Omega Beams through agility alone, while the Flash barely escapes those beams and Superman himself gets hit by them while actively trying to escape. Batman must therefore have superpowers of strength and agility. More to the point, surely Batman's cape is nowhere near big enough to enable him to glide in the first place; a real-life hang-glider would be far too big and cumbersome to allow flight between the towers of Gotham City. I suspect that a real-life Batman would have to live on Mars to be able to do what he does.

I also didn't see anything about lightsabres in Kakalios’ book... Here’s my suggestion, physics-wise:

A replica handle containing a ring of lasers and reflectors and a retractable, radio-aerial-style fencing foil/blade, extremely strong and rigid so it doesn't bend when swiped or on contact with solid objects (similar to an extendable baton). Attached to the end of the foil/blade is a small reflector disk, facing back towards the handle. When activated, the foil/blade smoothly extends to its full one-metre length in approx 0.25 seconds. Simultaneously, a series of lasers is emitted from the handle towards the reflector disk on the end of the foil/blade, creating loops of lasers and the appearance of a solid, rounded blade. The rapid extension of the light blade as it is switched on would be visible to the naked eye, as it would be a mechanical process rather than the instant switching-on of a laser. The reflector disk on the tip of the blade would be externally illuminated to the same colour and intensity as the laser blade to disguise it. The inner foil/blade would be largely protected from damage in combat by the laser blade surrounding it, which would do the cutting. Authentic sound effects would be motion- or contact-activated.

All you need to do now is sort out how to power it. Wireless is not possible, but add a long cable hidden up your shirtsleeve/down your trouser leg and a power source and you could probably make one good enough to pop balloons now. I think I need a lie down.

Or: suspend the reflector in a magnetic field, a metre away from the handle, so there is no need for an extending baton. Switching the device off causes the field to shrink, returning the reflector to its housing at the end of the handle. The lasers would cut anything they came into contact with, but the magnetic field would repel the field of another sabre, thus appearing solid when striking the beam of another sabre. That's how it worked in my back-of-an-exercise-book doodles in the late '80s.
Unfortunately, blasters and lightsabres have a bigger problem than accuracy: the amount of damage they do varies enormously without ever being adequately defined. Sometimes (when it's a main character) blaster bolts just give a nasty burn; other times they're insta-kill. Don't even get me started on the purpose of armour in Star Wars, when shots to armour are, from what I've seen, far MORE deadly than shots to unarmoured characters.

Thursday, August 29, 2019

FaceApp: "Fake Physics: Spoofs, Hoaxes and Fictitious Science" by Andrew May

“For a SF writer, or anyone else, to produce ‘fake physics’ that might even fool a professional physicist, it has to look much more like the real thing.”

In “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Technically Quantum Theory is a branch of physics, but it’s quite unlike any of the others. It doesn’t involve any authoritarian ‘laws of physics’ that you’re not allowed to break. Relativity says you can’t travel faster than light. The Second Law of Thermodynamics says you can’t have perpetual motion. Quantum Theory, on the other hand, says you can do anything you like. [...] It’s based entirely on jargon, which you can use to mean whatever you want it to mean. A few examples are: ‘Nonlocality’, ‘Entanglement’, ‘Wave-Particle Duality’, ‘Hidden Variables’ and ‘the Uncertainty Principle’. Feel free to use these terms in any way you want: no-one else understands them any better than you do.”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Is the Yeti the same species as Bigfoot or a different one? Does the effectiveness of telepathy depend on the distance involved? Why does the temperature drop when a ghost is in the room? How many members of the US congress are shape-shifting reptiles? If mainstream science [as opposed to crackpot science] addressed questions like these, people might start taking it seriously. ”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

Do you believe the FaceApp app really ages you? (*)

Do you believe in climate change? The problem is what you mean by it. This illustrates very nicely the issue with science: verification of results, the different kinds of bad science, and the relationship of science to public policy. Who are the deniers? What do they deny? And how do you think ruling out fraud from the key papers is going to enable you to purge them? Made-up data is the least of our problems with climate science; I know of only one alleged instance, where a series was allegedly illegitimately extended by simple infilling. The real problem comes when there is genuine data, in the form of lots of proxy series, but one picks only those which show what one wants. Is this fraud? Or is it bad judgment? Or is it perhaps legitimate attention to some very disturbing and important data? One then picks a statistical treatment which is not recognised as legitimate or optimal. Is this fraudulent? Probably not: one is not a statistician, and the study has passed peer review. People may disagree, but so what? People who are sceptical about the merits of dramatic CO2-reduction programs, the Paris Agreement, wind and solar, or high values for the climate sensitivity parameter are very rarely alleging fraud. What they are arguing is that some scientific propositions have not been shown to be correct, and that the public policies supposed to respond to them are not fit for purpose even if they were.

The problem of bad science flowing into public policy seems to me many times greater and more costly than the problem of fraud. And the result is that even if you catch fraud in the way the article suggests, you will still be left with the problem of bad science. The solution is better, more critical peer review; more, and more prompt, disclosure of data and methods; more critical coverage in the generalist press; and less talk of purging the unbelievers!

Climate science, in fact, makes pretty much all of its raw data available for just this kind of scrutiny, and model code is typically either open source or source-available (i.e. you have to sign something and what you can use it for is restricted, but it does not cost money). *Running* the models is harder, because their configuration is extremely fiddly and usually depends on details of the computational environment they live in. You also need significant computational resources to do anything non-trivial.

I've done some private work on post-processing tools, in fact, and I'm generally interested in this area. There is a lot of work to do to make things more accessible, inevitably, and a lot of the technical problems are not well understood by people with science backgrounds. The deniers do not like this, as more data, more easily processed, is very harmful to their interests. There has been a bad history of datasets becoming unavailable under denialist administrations in the US, and this seems likely to happen (and may already be happening) under Trump. This vanishing of data by denialists is reasonably terrifying (very terrifying, in fact), as it makes it hard to dispute their lies, which is the point of course.

What about peer review, you may ask? Ah yes, peer review: the practice of prestigious journals asking scientists to edit and review research articles for free, then charging their universities for access to that research; the practice whereby scientists submit papers to journals edited by their colleagues, who ask friendly colleagues to review them favourably. Peer review isn't worth much. Transparent and reproducible analysis that can be scrutinised by anyone, including algorithms, is the way forward. As in all areas of human activity there will be those who are unethical, but for the most part scientists and editors have high integrity. The major problem in my experience is not fraud but incompetence, compounded with protectionism. The protectionism is the bigger issue, and it also needs unpacking. Using the wrong tests is actually a much more important source of error than simple mathematical errors, and this more important problem will not be picked up. I agree that transparency is critical, but this kind of "vigilantism" will not generate the desired outcomes. We need mandatory open data, open data-sharing infrastructures, and competent reviewing. Fresh air is an excellent disinfectant, and we don't need antiseptics, at least not yet. Most of the time peer review works well, but there are multiple cases where "bad science" gets through, and there are also folks who announce results that haven't been through peer review at all. To give a few examples:

Árpád Pusztai, who stated that his research showed feeding genetically modified potatoes to rats had negative effects on their stomach lining and immune system. Pusztai's experiment was eventually published as a letter in The Lancet in 1999. Because of the controversial nature of his research, the letter was reviewed by six reviewers, three times the usual number. One publicly opposed the letter; another thought it was flawed but wanted it published "to avoid suspicions of a conspiracy against Pusztai and to give colleagues a chance to see the data for themselves"; the other four raised questions that were addressed by the authors. The letter reported significant differences in the thickness of the gut epithelium between rats fed genetically modified potatoes and those fed the control diet. The Royal Society declared that the study ‘is flawed in many aspects of design, execution and analysis’ and that ‘no conclusions should be drawn from it’; for example, too few rats per test group were used to derive meaningful, statistically significant data.

Jacques Benveniste, who published a paper in the prestigious scientific journal Nature describing the action of very high dilutions of anti-IgE antibody on the degranulation of human basophils, findings which seemed to support the concept of homeopathy. The controversial paper was eventually co-authored by four laboratories worldwide, in Canada, Italy, Israel, and France. After the article was published, a follow-up investigation was set up by a team including physicist and Nature editor John Maddox, illusionist and well-known skeptic James Randi, and fraud expert Walter Stewart, who had recently raised suspicions about the work of Nobel laureate David Baltimore. With the cooperation of Benveniste's own team, the group failed to replicate the original results.

Gilles-Éric Séralini, who published a paper in Food and Chemical Toxicology in September 2012 presenting a two-year feeding study in rats and reporting an increase in tumors among rats fed genetically modified corn and the herbicide RoundUp. Scientists and regulatory agencies subsequently concluded that the study's design was flawed and its findings unsubstantiated. A chief criticism was that each part of the study had too few rats to obtain statistically useful data, particularly because the strain of rat used, Sprague Dawley, develops tumors at a high rate over its lifetime.

Fraud exists, poor reviewing exists, incompetence exists. A study of these high-profile cases can help us understand weaknesses in the process of publication and dissemination, and I think we learn from them. What's been quite heartening is that the increased concern (interestingly kicked off by the pharma industry when they couldn't reproduce experiments from the literature) is now driving improved policies and the adoption of mandatory open-data provision. This has led to increasing awareness in the scientific community, and I hope that will improve things further. The need to work towards good practice everywhere is evident, but that's no reason to damn the whole of the scientific enterprise as flawed and fraudulent. Have you ever heard of "p-hacking"? If you have ever done any real data analysis you will know that you try many things: different variables, different transformations of those variables, different sample selections, different models. The model you publish is obviously not one that showed nothing, but the model that shows something interesting. That procedure, however, invalidates p-values: once you have searched over many specifications, a reported p < 0.05 no longer limits the chance of a spurious finding to 5%. This is why so many results in science do not replicate.
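To see why searching over many specifications wrecks the 5% guarantee, here is a tiny self-contained simulation (no real data involved, just the textbook null-hypothesis setup): under the null, every p-value is uniform on [0, 1], so an analyst who quietly tries 100 model variants and publishes the first one with p < 0.05 will "discover" something almost every time.

```python
import numpy as np

rng = np.random.default_rng(42)

n_experiments = 10_000   # independent "research projects"
n_tries = 100            # model variants quietly tried per project

# Under the null hypothesis, each try yields a p-value uniform on [0, 1].
p_values = rng.uniform(0.0, 1.0, size=(n_experiments, n_tries))

# Fraction of projects in which at least one variant dips below 0.05:
hit_rate = (p_values < 0.05).any(axis=1).mean()

# Closed form: P(at least one of 100 uniform p-values < 0.05) = 1 - 0.95^100
expected = 1.0 - 0.95 ** n_tries  # roughly 0.994
```

So the nominal 5% false-positive rate has become roughly 99%: the published p-value no longer means what it claims to mean.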

Incidentally, Asimov's 'Thiotimoline' story wasn't intended as SF. He wrote it while working on his PhD thesis, worried that he wouldn't manage the obligatory, fairly turgid academic style, so it began as a kind of spoof writing exercise. He showed it to an editor (John Campbell, as Andrew May correctly states, though without telling the whole story) who liked it and wanted to publish it. Asimov agreed, but only if a pseudonym was used, as he didn't want the doctoral committee assessing him to think he wasn't taking things seriously. To his horror it was published under his own name. Still, all turned out well: apparently the final question in the viva voce asked him to discuss the properties of his imaginary substance thiotimoline, and Asimov collapsed into laughter, realising they wouldn't have done this if they weren't going to pass him. There is so much stuff called science fiction today that we need a method of rating the science in the fiction. But so many readers do not care, the viewers are even worse, and most of the media just cares about collecting eyeballs.

How about archetypes for comparison to rate the science?

#1. Cat and Mouse by Ralph Williams

#2. The Servant Problem by Robert F. Young

#3. Omnilingual (Feb 1957) by H. Beam Piper

#4. All Day September by Roger Kuykendal

#1 was nominated for a Hugo but lost to Flowers for Algernon, so it can't be bad, but it says nothing whatsoever about the "science" or "technology" enabling the story: an alien just makes things happen as though by magic;
#2 is unusually similar to #1 in that the technology driving the story performs the same function, but the writer offers a kind of explanation mentioning Möbius loops and includes a little astronomy;
#3 is a mixture of speculation about future technology combined with considerable discussion of real science regarding physics and chemistry;
#4 is strictly hard SF and contains nothing likely to be impossible at some time in the not-too-distant future. It is in fact curious in that it is a Moon-colony story written ten years before the first Moon landing in 1969, in which a prospector finds water on the Moon, water that was actually found in October 2009. The story also has a little chemistry, and it brings to mind Arthur C. Clarke's "A Fall of Moondust";
#5 one of the characters attempts to get near light-speed using inertia-repressing technology and ends up with a ship full of pulverised goo where her crew used to be.

(*) All these imaging and algorithmic tools are based on Convolutional Neural Networks. A ConvNet is just the machinery for interpreting localized features (pixels, superpixels and so on). These features are similar to textures, but not exactly the same. There is a beautiful paper about visualizing what is actually learned (memorized) about images by this technology:

I think it is important to know how the machinery behind it actually interprets the pixels of images. FaceApp is most likely based on end-to-end trained networks. In other words, the developers do not have to tell the algorithm "this is nose wrinkles, this is smile lines" and so on for each component. They just throw millions of faces, young and old, at it, and the algorithm learns the underlying patterns by being asked to decide whether an output is a real old person or a fake old person. I do not know FaceApp's actual algorithm, but I'm pretty confident it is based on CycleGAN, DiscoGAN, or something similar; there's a whole zoo of Generative Adversarial Networks nowadays. So, the answer to the question is no. They just store your pictures somewhere else and throw FaceApp's "algorithm" at them to get at your aged self...
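As an illustration of the idea rather than FaceApp's actual (proprietary) code: the "real old person or fake old person" decision above is exactly the discriminator's job in a GAN, and the two networks are trained against each other with losses along these lines:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_logit_real, d_logit_fake):
    """Standard GAN losses computed from raw discriminator scores (logits).

    The discriminator D is trained to score real photos high and
    generated ("aged") photos low; the generator G is trained with the
    non-saturating loss, which rewards it for fooling D.
    """
    d_loss = -np.log(sigmoid(d_logit_real)) - np.log(1.0 - sigmoid(d_logit_fake))
    g_loss = -np.log(sigmoid(d_logit_fake))
    return d_loss, g_loss

# A confident discriminator (real scored +4, fake scored -4):
# D's loss is tiny while G's is large -- pressure on G to improve.
d_loss, g_loss = gan_losses(d_logit_real=4.0, d_logit_fake=-4.0)
```

CycleGAN adds a second generator/discriminator pair going the other way (old to young) plus a cycle-consistency loss, which is what lets it train on unpaired photos rather than before/after pairs of the same person.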
For fun I uploaded to the application the photo that depicts my hangover after two days of binge drinking, and it gave very weird results: my older face was very similar to Boris Johnson's. Is there anything wrong with me? My bathroom mirror has a similar app: whenever I look at it I see an old bloke with a whitish beard and a lined, jowly face instead of the fit thirty-year-old sex god that I still am mentally.

SF = Speculative Fiction.