How could an artificial intelligence acquire such knowledge based on the input of humans?
That's the easy part, really: AI can, and will, learn everything that's ever been published.
That only goes a little bit of the way, though - it's knowledge, not intelligence.
After that it gets more interesting.
Given that it's way better at, say, maths than we are, AI can work things out in the same way that humans worked things out.
How light is a form of EM radiation, for example - it falls out of the maths (sketched below). It's obvious once you've seen it, but it took an exceptional mathematician (Maxwell) to find it. Obvious if you've done university-level maths, that is, which excludes most of us...
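For reference, here is the textbook route (not Maxwell's own 1865 presentation): combine his equations in vacuum and a wave equation drops out, with a propagation speed that matches the measured speed of light.

```latex
\[
\nabla \times (\nabla \times \mathbf{E})
  = -\tfrac{\partial}{\partial t}(\nabla \times \mathbf{B})
  = -\mu_0 \varepsilon_0 \tfrac{\partial^2 \mathbf{E}}{\partial t^2}
\]
% With \nabla \cdot \mathbf{E} = 0 in vacuum, the left side reduces to -\nabla^2 \mathbf{E}:
\[
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.00 \times 10^8~\mathrm{m/s}
\]
```

That number coming out of two electrical constants is what revealed light to be electromagnetic.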
Special and general relativity - they come from the maths too. Ditto, Einstein.
In areas like string theory, it can re-derive and therefore verify theorised equations, and produce new ones. The string landscape involves something like 10^500 possible vacuum configurations, which takes a long time to work through with pencil and paper...
Machine learning was used to sift the collision data in the discovery of the Higgs boson.
It is, for example, working out new configurations for the magnetic coils that confine plasma in tokamaks (for fusion power).
So, acquiring knowledge is well within scope.
There are practical obstacles getting in the way, like finding enough cubic miles to put the processors in, but AI compute has been growing at about 5x per year. Ten years of that is 5^10, roughly 10 million times more. Fairly bright.
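A one-line sanity check of that arithmetic (assuming, generously, that the 5x/year figure holds for the whole decade):

```python
# Back-of-envelope: compound growth of AI compute at ~5x per year.
growth_per_year = 5
years = 10
print(f"{growth_per_year ** years:,}")  # 9,765,625 -> roughly 10 million
```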
I am of the opinion that I'm just a collection of atoms which has evolved. Things like emotions are a predictable development in the conscious state we have. Inventions like religions are predictable nonsense.
If you break down what emotions are and ask whether they can be applied to, or found in, AI, then the answer is partly no, partly yes, and partly "don't know yet".
The biological hormones and whatnot are missing, but the computational generation of behaviours which humans would read as emotions is easy.
There are already working analogues to feelings, which affect computational states.
Value systems can be put in, or the AI can work them out from seeing what drives humans.
An AI can behave in response: "focus" on a problem requiring maximum processing, or do explorative day-dreaming if the processing load is low.
So those are functional equivalents to emotions, for motivation and adaptive behaviour; a toy sketch follows below.
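Purely as a toy illustration (the class, names and thresholds here are all invented for the example, not any real system), a scalar "drive" signal modulating behaviour might look like this:

```python
import random

class Agent:
    """Toy agent whose behaviour is modulated by a scalar drive signal,
    a crude stand-in for the functional emotions described above."""

    def __init__(self):
        # A value signal; the text suggests this could instead be
        # learned by watching what drives humans.
        self.curiosity = 0.5

    def step(self, processing_load: float) -> str:
        # High load -> commit all resources to the task at hand ("focus").
        if processing_load > 0.8:
            return "focus: maximum processing on the current problem"
        # Low load -> wander the problem space ("day-dreaming"),
        # with the amount of wandering set by the curiosity drive.
        if random.random() < self.curiosity:
            return "explore: speculative day-dreaming on spare capacity"
        return "idle: maintain current state"

agent = Agent()
for load in (0.95, 0.2, 0.1):
    print(load, "->", agent.step(load))
```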
Subjective awareness is the difference between, say, feeling cold and knowing that the temperature is too low.
To what extent AI ends up with a point of view, or has a measure of happiness, is still open.
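The "knowing" half of that distinction is trivial to implement (a deliberately minimal sketch, with invented names and threshold):

```python
THRESHOLD_C = 4.0  # arbitrary cut-off for the example

def too_cold(temperature_c: float) -> bool:
    # "Knowing the temperature is too low" is just a comparison.
    return temperature_c < THRESHOLD_C

if too_cold(1.5):
    print("Report: temperature below threshold, turning heating on.")
# What the code cannot do is *feel* cold: there is no subjective
# experience anywhere in this program, only a state and a rule.
```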
Consciousness, tackled scientifically, gets divided into areas such as the extent to which a system knows about itself, how it relates to its world (the global workspace), or how it manages its own survival.
So far, AI doesn't show any signs of having subjective experience in these terms. But it might appear to in future. Is appearing to have such experiences the same as having them? That's philosophical. How would we tell? I think it's inevitable that AI will soon get to have those "feelings" stored as numbers, where we store them in part as chemicals.
When the machine has ethical concerns, or values things, based on what it has worked out for itself, it's looking pretty human. But there's another raft of human thinking architecture that none of this addresses: a metacognitive aspect, which gets complex.