
Neural networks cannot learn anything without information.

We can think of a neural network as a factory. Information is the raw material that this factory uses: sensors deliver the raw information to the neural network, and the factory's product is processed information.

During information processing, the neural network interconnects information that it gets from different sources. Learning means that the system creates behavioral models for similar situations. Using ready-made templates built from experience makes reactions to similar situations easier and faster.

Learning neural networks are systems in which data sources are interconnected with data-handling units. But without information, those networks are useless.

Neural networks can categorize the trusted pages that they use automatically. The AI can also use the user as an assistant: it shows homepages to that person, who evaluates the data. The user can then say whether the data is of high enough value, or simply put the homepages on a trusted list.

That means that if the user wants to use AI for physics, that person can put pages like "Phys.org" and "scitechdaily.com" into the trusted-page list, and the AI uses those pages to select data. That kind of curation requires high skill in analyzing information. So in this model, the system has a prime user who selects the sources that the AI uses, and other users can then use the trusted data in their work.
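As a minimal sketch of that prime-user model, the snippet below shows one way a trusted-source list could be gated behind a single curator. The class name, the role check, and the domain-matching rule are assumptions for illustration; the post does not describe an implementation.

```python
# A minimal sketch of the trusted-source model described above.
# The names, roles, and matching rule are assumptions, not a spec.

class TrustedSourceRegistry:
    """A prime user curates the source list; other users only read it."""

    def __init__(self, prime_user: str):
        self.prime_user = prime_user
        self.trusted: set[str] = set()

    def add_source(self, user: str, domain: str) -> None:
        # Only the prime user may extend the trusted list.
        if user != self.prime_user:
            raise PermissionError(f"{user} is not the prime user")
        self.trusted.add(domain.lower())

    def is_trusted(self, url: str) -> bool:
        # Accept a URL whose host is, or is a subdomain of, a trusted domain.
        host = url.split("//")[-1].split("/")[0].lower()
        return any(host == d or host.endswith("." + d) for d in self.trusted)


registry = TrustedSourceRegistry(prime_user="alice")
registry.add_source("alice", "phys.org")
registry.add_source("alice", "scitechdaily.com")

print(registry.is_trusted("https://phys.org/news/some-article.html"))  # True
print(registry.is_trusted("https://example.com/fake-physics.html"))    # False
```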

All systems need information for learning. The problem is that when the system learns, it doesn't know whether the information it uses is real or fake. This is true of all neural networks, and the human brain is one of them. Without information, the system is like an empty paper that cannot do anything.

Learning means that the system makes models using the information it gets from sensors. Then the system can generalize that model to all similar situations. In that sense, machine learning and human learning are similar processes: the machines get information in the same way as the human brain does. But there is one big difference between those learning models. The AI doesn't understand the meaning of the words.



AI is an impressive tool, and countless observation systems transfer information to AI-based systems. NASA and other organizations send space probes to gather information from distant planets, and the AI can interconnect that data with other systems. Here, we are talking about AI-based systems in astronomy.

The AI can search for and follow changes in the brightness of stars, which helps it find new exoplanets. But without a series of images taken over time, the AI cannot compare an object's brightness across a certain period. So without the telescope, the AI is useless.
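As a minimal sketch of that brightness-monitoring idea, the snippet below flags unusually deep dips in a toy light curve. The flux values and the sigma threshold are invented for illustration; real transit searches use far more sophisticated methods than a simple deviation test.

```python
# A minimal sketch of the brightness-change search described above,
# assuming a plain flux time series. The data and threshold are toys.

import statistics

def find_brightness_dips(flux: list[float], threshold_sigma: float = 3.0) -> list[int]:
    """Return indices where the star's flux drops well below its typical level."""
    mean = statistics.fmean(flux)
    stdev = statistics.stdev(flux)
    return [i for i, f in enumerate(flux) if f < mean - threshold_sigma * stdev]

# A toy light curve: roughly constant flux with a small transit-like dip.
light_curve = [1.00, 1.01, 0.99, 1.00, 0.97, 0.96, 0.97, 1.00, 1.01, 0.99]
print(find_brightness_dips(light_curve, threshold_sigma=1.0))  # [4, 5, 6]
```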

When we celebrate things like ChatGPT and other AI systems and tell how good their answers can be, we must realize one thing. Those systems use certain criteria for choosing the homepages where they get information. That means the system might not know whether the information on those homepages is trusted or faked.

There are certain parameters for how the AI selects the homepages where it gets information. And there is a theoretical possibility that somebody plays a practical joke on a person who uses AI to write a thesis. The joke could be that another person changes the information on a trusted page for a moment, just when the AI reads that page.

In that case, the AI makes mistakes because it doesn't know what the text means. If someone changes the text on a trusted homepage to, say, lines from Donald Duck, the AI would put those lines into the thesis. Fixing that error is quite easy: the AI can use two or more homepages, and if there are big differences between them, the system can present those homepages to the user. The user then decides whether the information is valuable.
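As a minimal sketch of that cross-checking step, the snippet below compares the same story from several sources and flags pairs that disagree too much for automatic use. The string-similarity measure and the 0.5 threshold are assumptions; a real system would compare claims, not raw text.

```python
# A minimal sketch of the multi-source cross-check described above.
# The similarity measure and threshold are illustrative assumptions.

from difflib import SequenceMatcher

def flag_conflicting_sources(texts: dict[str, str], min_similarity: float = 0.5) -> list[tuple[str, str]]:
    """Return pairs of sources whose texts disagree too much for automatic use."""
    conflicts = []
    sources = list(texts)
    for i, a in enumerate(sources):
        for b in sources[i + 1:]:
            ratio = SequenceMatcher(None, texts[a], texts[b]).ratio()
            if ratio < min_similarity:
                conflicts.append((a, b))  # hand these pairs to the user for review
    return conflicts

pages = {
    "phys.org": "The exoplanet transits its star every 3.2 days.",
    "scitechdaily.com": "The exoplanet transits its host star every 3.2 days.",
    "tampered-page": "Quack quack, said Donald Duck.",
}
print(flag_conflicting_sources(pages))
# [('phys.org', 'tampered-page'), ('scitechdaily.com', 'tampered-page')]
```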

So AI requires a new type of skill in working life. The person who uses AI must be able to evaluate the text, find conflicts, and then evaluate the source.

Science advances very fast, which means the AI must have parameters so that it selects only the newest possible data, using only the latest updates on trusted homepages. The problem is that our "practical joke" would be the latest update.
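As a minimal sketch of why that freshness rule backfires, the snippet below keeps only the most recent update from each source, and so picks exactly the tampered version. The Article type and the timestamps are invented for illustration.

```python
# A minimal sketch of the "newest data only" rule and its weakness.
# The Article type and timestamps are hypothetical.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Article:
    source: str
    text: str
    updated: datetime

def newest_per_source(articles: list[Article]) -> dict[str, Article]:
    """Keep only the most recent update from each trusted source."""
    latest: dict[str, Article] = {}
    for a in articles:
        if a.source not in latest or a.updated > latest[a.source].updated:
            latest[a.source] = a
    return latest

feed = [
    Article("phys.org", "Peer-reviewed exoplanet result.", datetime(2024, 5, 1)),
    Article("phys.org", "Quack quack.", datetime(2024, 5, 2)),  # the tampered "last update"
]
# The freshness rule selects exactly the tampered version.
print(newest_per_source(feed)["phys.org"].text)  # Quack quack.
```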

