An Article About Reasons for Manuscript Rejection

The article below is copied and pasted from https://www.researchgate.net/institution/Editage/post/5a0ac296f7b67ecec61b383e_Top_8_reasons_for_manuscript_rejection as a reminder for myself.

—-

Rejection is the norm in academic publishing. Even researchers at the top of their field have experienced rejection. Several peer-reviewed studies have investigated the reasons that journals reject papers. Listed below are the most common rejection reasons cited in these studies.

Lack of originality, novelty, or significance
One of America’s leading newspapers, the New York Times, recognized the truth that “journal editors typically prefer to publish groundbreaking new research.” Academic journals are constantly on the lookout for research that is exciting and fresh. Many authors tend to cite the reason that “this has never been studied before” to explain why their paper is significant. This is not good enough; the study needs to be placed in a broader context. Authors should give specific reasons why the research is important, for example, the research could affect a particular medical intervention, it could have a bearing on a specific policy discussion, or it could change a conventional theory or belief.

Mismatch with the journal

Many manuscripts are rejected outright by journals, before they even undergo peer review, because the manuscript is not appropriate for the journal’s readership or does not fit into the journal’s aims and scope.

Flaws in study design

Even a well-written paper will not mask flaws in study design. Indeed, this is a fundamental problem that must be resolved in the initial stages of the study, while conceptualizing the study.

Poor writing and organization

It is very important for authors to present a persuasive and rational argument in their papers. You should be able to convince readers that your research is both sound and important through your writing.

Inadequate preparation of the manuscript

Non-English-speaking authors often confront an additional problem: peer reviewers do not always distinguish between the manuscript content and style of writing. Thus, their manuscripts may end up getting negative comments even if the research is of high quality.

However, all the problems in this category are easily fixable, either by asking a native English-speaking friend or colleague to review the paper or by getting the paper professionally edited and formatted.

—-

Advertisements

Grant search engines

The University of Virginia recently shared funding discovery tools for UVA researchers.

GrantForward: https://www.grantforward.com

and

Pivot: pivot.cos.com

You can search upcoming grant opportunities, with deadlines, by subject area (science, social science, education, etc.).

Also, Pivot helps you connect with other researchers within the university by setting up a My Profile with your specialties and contact information. Neat!

Tapestry Workshop 2016 for diversity in high school computing

I am attending and assisting with the Tapestry Workshop 2016.

The Tapestry Workshop is a professional development workshop intended to increase the number of diverse students in high school computing classes. The Lighthouse team has offered this workshop annually since 2008, funded by the National Science Foundation (NSF). High school computing teachers, selected from among applicants, are invited to attend the 3-day workshop. For more information about the Tapestry Workshop, visit here.

How is this workshop relevant to my work?

I am a postdoctoral research associate for the Lighthouse CC project, one of the projects the Lighthouse team is working on. It has also been funded by NSF, since 2015, and targets professional development for community college computing instructors. Unlike the face-to-face Tapestry Workshop, Lighthouse CC provides online courses in MOOC format.

The Tapestry Workshop and Lighthouse CC target different audiences: Tapestry is for high school teachers, while Lighthouse CC is for community college instructors. However, the goal is the same: to increase diversity in computing courses, including women and under-represented minorities. Considering that (1) both projects are professional development, (2) both serve adult learners, and (3) Lighthouse CC is focused on creating an online learning environment that can provide a face-to-face-workshop-like experience, my goal is to get ideas for improving our online learning environment design by observing the Tapestry Workshop.

 

I will share some of my notes here at the end of the workshop, on Day 3.


Notes from the AECT 2015

At the Hyatt Regency Hotel, Indianapolis.

Nov 6, 2015

  • My presentation, Collaboration study: talked with Dr. Brill after my presentation; she submitted a paper about ID students’ learning from peer review. Among future research topics, the audience was interested in (1) why both higher and lower scorers outperformed when they were in a lower-performing group rather than a higher-performing group.
  • Engagement in PBL: got a contact from a middle school Spanish teacher in Fairfax, VA. Her students are mostly Korean, and the school principal is very open to doing some studies in the school. She said she will contact me to talk further.
    • Suggestions on the engagement survey
      • for the behavioral engagement survey item asking whether students put in more time than on other projects, provide scales, for example: -50% – -25% – 0% – +25% – +50%
      • review the survey and revise any items that lead students toward positive responses; make every item neutral

Nov 5, 2015

Got a consultation about how to keep developing and expanding a research agenda and papers, from faculty with 5+ years of experience.

 

Nov 4, 2015

#5 case-based MOOC (Won)

  • CBL eLearning designs to address ethical dilemmas in genomics and also lead to satisfaction with learning
  • Project(grant) goal: “develop a university level, case-based, online course on Ethics in Genomics for broad dissemination through a MOOC”
  • they hired a videographer, used WebEx
  • Micro-collaboration model? Aleckson and ? <– ask Won
  • Research questions: continuous formative assessment
  • Richey & Klein, 2007, design and development research.
  • Aleckson & Ralston-Berg (2011), Mindmeld: Micro-collaboration between eLearning designers and instructor experts.
  • rubric development? ; who developed and how? and who evaluated?–> Rubric sample
  • To track student learning achievements, they checked their content knowledge
  • they analyzed student satisfaction with an open-ended survey –> only?
  • Results
    • students need more case options; give them options for completion pace –> for the LCC MOOC, I think it would be great to offer a flexible completion pace. Why not?
    • students prefer discussion, but break discussions with small numbers of students
    • hated the essay-type format

#4 MOOCocracy (Jaimie Loizzo)

  • checked motivation, success, and completion
  • adult learners’ social science MOOC experience (instructor, social, and cognitive presences; with community of inquiry) –> I think this is one of the things that I can do for the LCC MOOC
  • narrative analysis: asked students what the “meaning of success in a MOOC course” is for them
  • Barriers: time and course design (dislike videos, discussions, etc)

#3 Detecting at-risk students in asynchronous online discussion

  • “at-risk students” is not an appropriate term… what else? Lower-performing students, potentially?
  • used proxy variables: when the actual variable is not measurable, use components of the actual variable
  • used Random Forest: a prediction method with a relatively small sample and large number of predictors (Bureau et al., 2005)
  • will being an active or non-active learner predict final learning outcomes?
  • Implication: pre-diagnose to provide appropriate or personalized intervention/scaffolding
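The prediction idea in these notes — flag potentially lower-performing students from proxy variables about their discussion activity — can be sketched in a few lines. Below is a minimal, stdlib-only toy, not the presenters' actual method: it trains a small forest of bootstrap-sampled decision stumps on synthetic data. All features (post count, reply count, average post length), labels, and thresholds are made up for illustration; a real study would use measured data and a full random forest implementation.

```python
# Toy random-forest-style prediction of a binary "lower-performing" label
# from hypothetical discussion-activity proxy variables.
import random

random.seed(0)

def make_student():
    # Synthetic proxies: (posts, replies, avg post length in characters).
    posts = random.randint(0, 20)
    replies = random.randint(0, 10)
    length = random.randint(20, 400)
    activity = posts + replies + length / 100
    label = 1 if activity < 12 else 0  # low activity -> flagged (toy rule)
    return ((posts, replies, length), label)

data = [make_student() for _ in range(200)]

def train_stump(sample):
    # Pick one random feature, then the threshold that best splits the sample.
    f = random.randrange(3)
    best = None
    for t in sorted({x[f] for x, _ in sample}):
        acc = sum((1 if x[f] < t else 0) == y for x, y in sample) / len(sample)
        if best is None or acc > best[0]:
            best = (acc, t)
    t = best[1]
    return lambda x: 1 if x[f] < t else 0

def train_forest(data, n_trees=25):
    # Each stump sees a bootstrap resample of the data.
    return [train_stump([random.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    # Majority vote across stumps.
    return 1 if sum(tree(x) for tree in forest) * 2 > len(forest) else 0

forest = train_forest(data)
accuracy = sum(predict(forest, x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The stumps here stand in for full decision trees to keep the sketch short; the bootstrap resampling and random feature choice are what make it "random-forest-style."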

#2 Relation between visual attention and learning analytics dashboard comprehension

  • with an eye-tracking system, BeGaze
  • learners’ graph literacy affects their use of the LAD

#1 Learning Analytics Dashboard (Dongho)

  • seven variables that will affect behavioral engagement
  • set hypotheses and ran SEM
  • Findings/Implications: learners need to understand the usage of the LAD, and reflective use should be facilitated
  • operational definition of LAD (in edX it is called a progress check)

What is Learning Sciences?

According to the International Society of the Learning Sciences (ISLS),

“Researchers in the interdisciplinary field of learning sciences, born during the 1990’s, study learning as it happens in real-world situations and how to better facilitate learning in designed environments – in school, online, in the workplace, at home, and in informal environments. Learning sciences research may be guided by constructivist, social-constructivist, socio-cognitive, and socio-cultural theories of learning.”

Methods of Research: Qualitative, Quantitative within design-based research

“Although controlled experimental studies and rigorous qualitative research have long been employed in learning science, LS researchers often use design-based research methods. Interventions are conceptualized and then implemented in natural settings in order to test the ecological validity of dominant theory and to develop new theories and frameworks for conceptualizing learning, instruction, design processes, and educational reform. LS research strives to generate principles of practice beyond the particular features of an educational innovation in order to solve real educational problems, giving LS its interventionist character.”

Resources from Wikipedia

Teaching Large Classes Conference (Jul, 23, 2015) at VT

LINKS TO CONFERENCE SCHEDULES AND PROPOSALS
Interesting Keywords from sessions:

1. Keynote speech

learning analytics from Echo360

2. Self-development study

  • Individual sustainability/sustainable personality: social, intellectual, health, emotional, economic–> but was not relevant to learning.
  • cognitive dissonance

3. Music Model for Motivation (Dr. Brett Jones, VT)

  • eMpowerment: decision making power
  • Usefulness: relevant to career goals, interest, and real-world
  • Success: instructor’s expectations (rubric), challenging enough, feedback. Confidence and satisfaction?
  • Interest: to keep students’ attention use diverse instructional strategies
  • Caring: let them know instructors care about students
    • For my understanding, I compared the MUSIC model with the ARCS model (Attention, Relevance, Confidence, and Satisfaction) that I am using for the grant; eMpowerment is a unique element of the MUSIC model.
    • I had a chance to talk with Dr. Jones (brettjones@vt.edu) and got some answers to my confusion about motivation. When exploring motivation, we don’t need to use all of the elements; sometimes we only need 2-3 elements of motivation or engagement. Just choose the focus area where I would like to make changes. Engagement vs. motivation: pretty much the same topic, just different terms. He has published a paper with engineering researchers regarding motivation; email him to request the information. He helped me link motivation theories to STEM.
    • The MUSIC model is usually explained from an instructor’s point of view, but he said, “All five components are students’ perceptions” <– I think this is a very important point when applying the model to a learning environment, because if students do not perceive the instructor’s changes, the instructional strategies are useless.
    • 6-point Likert scale: Strongly Disagree, Disagree, Somewhat Disagree, Somewhat Agree, Agree, Strongly Agree
    • It seems like MUSIC model is similar to self-determination theory. How is it different? Probably based on self-determination theory?
    • Jones, B. D. (2009). Motivating students to engage in learning: The MUSIC Model of Academic Motivation. International Journal of Teaching and Learning in Higher Education, 21(2), 272-285.
    • http://www.theMUSICmodel.com
    • Survey: does it come with cross-checking, or is it completed only by students? Only by students. The MUSIC instrument has not been cross-validated yet. It would be really interesting to have that data in the future.

4. Investigating links between motivation, student learning strategies, and achievement on high-stakes tests in large undergraduate courses (Jacob Grohs jrgrohs@vt.edu, Glenda Young glenda87@vt.edu)

  • Cluster analysis: grouping students into failing, weak, improving, fast improving, and strong based on how their test scores change from test 1 to 5 –> check it. It will be useful for tracking changes in students’ learning progress, and good for checking patterns of behavior. (contact Glenda for this analysis)
  • Their preliminary findings on hours per week: failing groups did not put many hours into peer work but outperformed on individual problem-solving tasks. Hours per week was self-reported by students.
  • The most interesting presentation/research so far in this conference.
THOUGHTS
  • For team-based learning, students’ motivation level for the course affects team/learning participation. It looks like with motivated students (law school, med school, major courses), team-based learning worked more smoothly than in courses for non-majors or elective courses.
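The trajectory-based grouping described in this session can be approximated with a small clustering sketch. The toy below is an assumption about the general approach, not the presenters' actual analysis (contact Glenda for their method): it summarizes each synthetic five-test score trajectory by its mean and overall change, then runs a tiny k-means to recover groups resembling failing, improving, and strong.

```python
# Toy k-means clustering of five-test score trajectories on synthetic data.
import random

random.seed(1)

def features(scores):
    # Summarize a trajectory by (mean score, overall change from test 1 to 5).
    return (sum(scores) / len(scores), scores[-1] - scores[0])

def kmeans(points, k=3, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Synthetic students: strong, improving, and failing trajectories.
students = (
    [[random.randint(85, 100) for _ in range(5)] for _ in range(10)]        # strong
    + [sorted(random.randint(50, 90) for _ in range(5)) for _ in range(10)] # improving
    + [[random.randint(20, 50) for _ in range(5)] for _ in range(10)]       # failing
)
points = [features(s) for s in students]
centers, clusters = kmeans(points, k=3)
print("cluster sizes:", sorted(len(c) for c in clusters))
```

Real analyses would likely use more tests, validated features, and a principled choice of k; the point here is only that (mean, change) already separates rough failing/improving/strong patterns.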

It is very refreshing to hear their research presentation without PPT slides.

Changing personality after checking multi-dimensional survey results? –> I think people’s personalities change throughout their lives, but it would be meaningful to give college students opportunities to reflect on and self-diagnose their personalities: http://www.personalitypad.org/