Author Topic: CNPS General Discussion  (Read 1340 times)


Re: CNPS General Discussion
« on: May 21, 2017, 01:56:44 pm »
Hi Bruce. You said: "I'm sure others have analyzed the MM theory. If we could find reports already written about this, it would save a huge amount of our time."

I doubt that there are any unbiased reports. Plenty of people, including scientists, are interested in MM's ideas, but they don't take the time to write a very meaningful report.

I looked briefly at the CNPS Wiki, and it looks like it will be merely a collection of alternative science papers. I don't think that will be very helpful. What would be helpful is establishing a system for evaluating theories and claims (while minimizing bias) and publicizing the best ones, with only links to the others so readers could see why they don't make the grade, which could also lead to improving those theories.

Making a list of the essential elements of each theory or claim, as you suggested, would be important. But then there needs to be a process for evaluating each element too. I could try working out such a process on the CNPS Philosophy of Science forum. Readers could be tested on logic and on knowledge of a theory's subject matter before qualifying to evaluate the essential elements of a theory. Then CNPS could publicize the best theories. Mainstream theories would need to be evaluated too, so the public can see why alternatives are sometimes better.


Sunday, May 21, 2017 2:45 PM
_I agree with the essence of all of your points. So, here's how I would follow them:
_If you can find ANY MM reports, I think this would be a benefit. I agree, they will be biased. But I'm looking more for a "checklist" of critiques rather than final resolutions. We would also be starting a bibliography on the topic.
_I have the same observation about the Wikis. At the present time, we don't even have poor histories of prior criticism. Given we can get a collection of critiques, for any topic, then we can address your additional concerns.
_Your point about establishing a "system for evaluating (while minimizing bias) theories and claims" is my next TOP priority. I actually tried to find such a system by doing a fair search on the topic of "peer review". Wouldn't you expect someone to have addressed this before? What I found was terrible. I've attached my summary of what I found. It is still a feeble approach. You touched on this again in your last paragraph.
_Your point about then publicizing "the best ones" I think is good, but only a partial goal. What would be just as helpful is publishing a summary of what elements of ALL the papers were good breakthroughs, and what elements appeared to be flaws which are simply repeats of often repeated flaws.
_Your last paragraph brings up "reader testing". This is a sensitive issue if we try to grade ALL readers. What I think is a good solution is to reward great Peer Reviewers. That stays on the positive side. The other thing that will become an indirect measure is just the "rejection", by peer reviewers, of things people say, without calling those members out by name. This depends on how well we can develop a peer review system and methodology.
_So, all of these items should be HIGH PRIORITY for us. We can both test them out in our structured forums (... and I admit, I'm still way behind getting mine going.)

CNPS Peer Review Guidelines [from web search]
_Scientific progress depends on communication of information that can be trusted. Reviews should be objective evaluations of the claims presented.
_The core values of peer review are
1. availability – does the reviewer have the time to do the review by the deadline?
2. expert assessment – does the reviewer have the background to do the review?
3. transparency – the process is open for review by others
4. impartiality – the review is not biased by social background of the submitter
5. fairness – the review is not biased by social acceptance of the science presented
6. integrity – the review is not biased by financial, social, religious or philosophical background of the reviewer. The reviewer presents all significant findings, both positive and negative
_The reviewer will not make ANY personal comments. For example, it is not appropriate to write: “The author clearly has not read any Foucault.” Instead, say: “The analysis of Foucault is not as developed as I would expect to see in an academic journal article.” Also, be careful not to write: “The author is a poor writer.” Instead, you can say: “This article would benefit from a close editing. I found it difficult to follow the author’s argument due to the many stylistic and grammatical errors.”
_Technical rigor is expected. Data and arguments are to be addressed or substantially clarified.
_Reviews must be constructive and be presented in a courteous tone.
_The reviewer will respect the intellectual independence of the author. When writing a review, be mindful that you are critiquing the article in question – not the author.
_During the review, the reviewer will be expected to do the following:
1. Before starting to read, make sure you have:
 a. tools to mark up the copy;
 b. a method to make notes as you read. The notes should have sections for questions and for things that seem to be mistakes.
2. Read the article, marking up the copy as you go. Mark all important points, using reference numbers that index them to longer discussions in a separate notes area, as well as errors in graphs and tables, spelling, and grammar.
3. Make a simple outline of the article. Write a brief 3 or 4 sentence summary of the article. List its major contributions.
4. Write a draft of the review. If the review is favorable, write a longer summary highlighting the strengths. The structure of the review should be as follows:
a. Write out any major criticisms. Begin with the larger issues and end with minutiae.
b. Some major areas of criticism to consider:
Is the article well-organized?
Does the article contain all of the components you would expect (Introduction, Methods, Theory, Analysis, etc)?
Are the sections well-developed?
Does the author do a good job of synthesizing the literature?
Does the author answer the questions he/she sets out to answer?
Is the methodology clearly explained?
Does the theory connect to the data?
Is the article well-written and easy to understand?
Are you convinced by the author’s results? Why or why not?
5. Write out any minor criticisms of the article.
6. Address editorial issues; for example: mislabeled tables and graphics, misspellings and grammar.
7. Review the review.


5/23/17 8:50AM
_I got started on the Photonic Universe forum, including a list of essential elements of the model. Now I'm trying to start on the Electrodynamic Universe forum and the Catastrophism forum, since I have a sense of how to proceed.
_I wanted to sticky a couple of threads I had started earlier, but the sticky option was no longer available once they had been posted without being stickied. The option did reappear when I posted a second message in the same thread, so I was able to sticky them, but then I had to delete the second post, since you want just one post each in the threads I post in exclusively. So it would be nice if the sticky option remained available after the first post, instead of only appearing with a second post.
_Another issue is the date on the threads that I post in exclusively. They show the date of the first posting. Instead, they should show the date of the latest update; otherwise, readers will think the thread hasn't been posted to since the first posting. An example of this is my thread "Electrodynamic Universe - working paper".
_Also, when a reader opens a thread, the date of the first posting or edit should appear inside, with the last update shown at the top, maybe right above or before the first posting date. It might be good if each update date (not just the last one) were also listed inside, but that's not important.
_I'd like to experiment with "peer reviewers". I think any reader should be able to qualify as one by doing a short self-test on the forum.
_You said: "... we don't even have poor histories of prior criticism. Given we can get a collection of critiques, for any topic, then we can address your additional concerns."
--- Critiques sometimes contain good data, including on logic, but I don't think they're very important, because they take time to review, interpret, and discuss. I like to simplify a lot. Just one reader or peer reviewer is a good start for evaluating claims, and I hope to try that myself before long, as a trial. Each essential element (claim or idea) of a model could be rated P for 70-100% probable, M for 30-70% probable (M for Maybe), or I for 0-30% probable (I for Improbable). I think all P ratings should eventually have explanations included, but they wouldn't need to initially.
--- This simple method could be used for theories of any length. "The sky is blue" is a theory; a better theory would be that the sky is blue a certain percentage of the time, etc. Long theories merely have more claims, each of which can be evaluated separately.
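--- To make the idea concrete, here is a minimal sketch of the rating scheme in code. The P/M/I labels and percentage ranges come from my description above; the function and variable names are just illustrations, not part of any existing CNPS system.

```python
# Sketch of the P/M/I rating scheme: each essential element (claim) of a
# theory gets a reviewer's probability estimate, which maps to a label.
# P = 70-100% probable, M (Maybe) = 30-70%, I (Improbable) = 0-30%.

def rate_element(probability: float) -> str:
    """Map a reviewer's probability estimate (0.0-1.0) to a P/M/I label."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0.0 and 1.0")
    if probability >= 0.7:
        return "P"  # Probable
    if probability >= 0.3:
        return "M"  # Maybe
    return "I"      # Improbable

def summarize(theory: dict) -> dict:
    """Rate each essential element (claim) of a theory separately."""
    return {claim: rate_element(p) for claim, p in theory.items()}

# Example: a short "theory" broken into separately rated claims.
sky_theory = {
    "The sky appears blue in daytime": 0.95,
    "The sky is blue at all times": 0.1,
}
print(summarize(sky_theory))
# -> {'The sky appears blue in daytime': 'P', 'The sky is blue at all times': 'I'}
```

A long theory would simply have more entries in the dictionary, each evaluated on its own.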
_You said: "Your point about then publicizing 'the best ones' I think is good, but only a partial goal. What would be just as helpful is publishing a summary of what elements of ALL the papers were good breakthroughs, and what elements appeared to be flaws which are simply repeats of often repeated flaws."
--- It's not clear what you mean by "ALL the papers". Will you explain? The readers' (peer reviewers') evaluations of the essential elements of papers should be made public, and it should be easy to see which elements are rated P, M, and I; the ones with the most P's should then move to the Wiki, IMO.
_I think it's also important to prioritize theory topics. Those that seem most important for the good of humanity and the ecosystem should have highest priority. Readers or peer reviewers should be encouraged to evaluate those first. CNPS should also display them by such priority, IMO.
« Last Edit: June 14, 2017, 07:28:55 pm by Admin »