Here's my possibly self-deluded version of how I see my own behaviour when trying to understand information that I cannot directly test:
1) Doubt all sources of data and try not to get attached to any conclusions. Recognise that any data may be biased & may be poorly collected.
2) I use a hierarchy. If, say, 5 different state statistics departments, each measuring its own country, all produce fairly similar results, then this is strong data.
If, say, 6 universities in different parts of the world, staffed by different people and all using different methods, publish very similar data, this is also strong data.
What experts say about their own field is good, but it is still opinion. I take their views very seriously, but only as expert opinion.
An expert (say, a cardiologist) making a definitive statement about viral infections is offering a weaker opinion, because the topic is peripheral to that person's expertise.
When a chiropractor makes a statement, I mostly ignore it, as it is a layperson's opinion.
When an anonymous source on social media makes a claim, I mostly ignore it because life is too short to check every random idea.
What is said by politicians, and what is written by journalists (especially those who have no relevant training), is low quality.
If a statement from someone influential goes against the predominant evidence, I try to dig into the source to see whether it is worth reassessing.
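The hierarchy above can be sketched as a rough ordinal scale. This is purely an illustration: every tier name and number below is an invented assumption, not a method the author states.

```python
# Illustrative only: invented tiers approximating the hierarchy described
# above (replicated independent data > in-field experts > out-of-field
# experts > laypeople, politicians, untrained journalists).
SOURCE_TIER = {
    "replicated_official_statistics": 5,   # several national bodies agreeing
    "replicated_independent_studies": 5,   # several universities, different methods
    "expert_in_field": 3,                  # taken seriously, but still opinion
    "expert_outside_field": 2,             # peripheral to their expertise
    "layperson_or_anonymous": 1,           # mostly ignored
    "politician_or_untrained_journalist": 1,
}

def credibility(source_type: str, independent_agreeing_sources: int = 1) -> int:
    """Rough credibility score: base tier, boosted when several independent
    sources agree (the replication effect in point 2)."""
    base = SOURCE_TIER.get(source_type, 0)
    # Agreement among independent sources raises confidence, capped so that
    # sheer volume cannot leapfrog the whole scale.
    boost = min(independent_agreeing_sources - 1, 2)
    return base + boost
```

The cap on the boost reflects point 1: even well-replicated data stays open to doubt rather than becoming certainty.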
3) Over time the preponderance of evidence settles in my mind and I become less interested in contrary claims. This may lead me to ignore relevant new data, but I think it is inevitable that, after a time, we reject contrarian challenges to long-supported evidence.
4) On many topics I don't have an opinion because I don't know enough. On some I have a lot of data, but I don't regard it as good enough to support much of a conclusion. On other topics the evidence seems substantial & it would take a lot to make me revise what I have accepted as probably true.