In 2024-25, I had the honor of serving as chair for the Sociology of Education Association’s Annual Conference. I organized a convening under the theme Sociologists as Change Agents: What Can We Do About It? and was delighted to review the excellent précis submissions (with the support of other SEA board members, without whom there simply would not have been a conference). As someone who loves to open black boxes, I wanted to share a bit about my experience and offer some advice for future SEA hopefuls.
The Selection Process
Before offering that advice, I thought it might be helpful to share a bit more about the mechanics underlying the selection process. There were 109 submissions and only 35 presentation slots on the program. Here is how we chose which presentations to include:
- I aimed to facilitate a “double-blind” review process. I randomly assigned précis to reviewers, taking care to ensure that no reviewer received their own. Reviewers did not see the authors’ names or any identifying information associated with a submission. I also asked reviewers to let me know if they had a conflict of interest with any of the précis they were assigned (for example, if it was obvious to them that a close friend or advisor had submitted it).
- Reviewers were asked to rate each submission on five criteria: fit with the conference theme, theoretical framework, data and methods, contribution, and importance, each on a scale of 1 (“very poor”) to 5 (“excellent”). A composite score then took the average of these five ratings. Before rating each proposal, reviewers were reminded of key inclusivity considerations developed by SEA’s DEI committee as well as a description of the evaluation criteria (including how they might be applied to a theory-focused proposal). In the spring conference, all proposals are evaluated identically; in the fall conference, however, there is a boost for first-time presenters and graduate students.
- After all reviewers had submitted their ratings, I weighted ratings according to reviewer patterns. For example, as a reviewer, my ratings of methods were slightly lower on average than those of other reviewers. I thus made slight adjustments to scores to ensure that submissions assigned to particularly stringent reviewers were not at a disadvantage. Then, since there were only 35 presentation slots, I first looked at the top 35 submissions (which happened to correspond exactly to a cutpoint composite score of 4). Among these, there were two submissions that I did not think made sense for the conference, so I went to the submissions with the next two highest composite scores.
- I tagged submissions (or coded them, if you will) with helpful details: subject area, whether the work concerned K-12 or higher ed, unit of analysis, methodology employed, and keywords that came to mind as I read the précis. This helped me identify the themes for the resulting seven sessions (featuring five presentations each). In three cases, a submission did not quite fit within a session theme; I assigned these to posters and continued down the list (sorted by composite score) to fill those three spots.
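For readers who, like me, enjoy seeing the gears turn, here is a minimal sketch in Python of what this pipeline could look like. To be clear, this is an illustration and not the actual workflow: the reviewer names, the assumption of two reviews per précis, and the simulated ratings are all made up.

```python
import random
import statistics
from collections import defaultdict

random.seed(7)  # reproducible illustration

# Illustrative data: each précis ID maps to the set of reviewers (if any)
# on its author team, so we can avoid assigning anyone their own work.
precis_authors = {
    "P01": {"rev_a"}, "P02": {"rev_b"}, "P03": set(),
    "P04": {"rev_c"}, "P05": set(), "P06": {"rev_a"},
}
reviewers = ["rev_a", "rev_b", "rev_c", "rev_d"]
REVIEWS_PER_PRECIS = 2  # an assumption made for this sketch

# Step 1: randomly assign précis to reviewers, skipping self-review.
assignments = defaultdict(list)  # reviewer -> list of précis IDs
for pid, author_revs in precis_authors.items():
    eligible = [r for r in reviewers if r not in author_revs]
    for r in random.sample(eligible, REVIEWS_PER_PRECIS):
        assignments[r].append(pid)

# Step 2: each review rates five criteria from 1 ("very poor") to
# 5 ("excellent"); a review's composite score is the mean of the five.
CRITERIA = ["fit", "theory", "methods", "contribution", "importance"]

def composite(ratings: dict) -> float:
    return statistics.mean(ratings[c] for c in CRITERIA)

raw = {}  # (reviewer, précis) -> composite score, simulated here
for r, pids in assignments.items():
    for pid in pids:
        raw[(r, pid)] = composite({c: random.randint(1, 5) for c in CRITERIA})

# Step 3: adjust for reviewer stringency. Shift each reviewer's composites
# by the gap between the grand mean and that reviewer's own mean, so précis
# assigned to harsh reviewers are not disadvantaged. (The real adjustment
# may have worked criterion by criterion; this sketch adjusts composites.)
grand_mean = statistics.mean(raw.values())
reviewer_mean = {
    r: statistics.mean(v for (rr, _), v in raw.items() if rr == r)
    for r in assignments
}
adjusted = {(r, p): v + (grand_mean - reviewer_mean[r]) for (r, p), v in raw.items()}

# Step 4: average each précis's adjusted reviews and rank; in the real
# process, the top 35 would then be curated into themed sessions.
per_precis = defaultdict(list)
for (_, pid), v in adjusted.items():
    per_precis[pid].append(v)
for pid in sorted(per_precis, key=lambda p: statistics.mean(per_precis[p]), reverse=True):
    print(pid, round(statistics.mean(per_precis[pid]), 2))
```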
Ultimately, I was satisfied with the selection process and pleased with the resulting program, even as I sent painful rejection emails to a close friend, a cherished advisor, and several members of the SEA board who had themselves helped in the review process. Among the presentations with the top five scores (all of which were included in paper sessions), the first authors included two graduate students, a postdoc, an assistant professor, and a full professor. Only one was from an SEA board member. Four of the five were scholars of color. None had been on the program in the year prior.
Several things struck me throughout the review process:
1. The conference submissions were excellent.
There were 109 submissions that qualified for consideration in the review process and only 35 presentation slots in the program. Eighty-seven submissions had composite scores of 3.00 or higher (with scores of 3.84 or higher represented in the final program). I cannot emphasize enough the quality of submissions that I was unable to include on the program. There were submissions I would have been delighted to see presented; submissions I was deeply curious about; submissions that would have informed my own work; submissions whose scores, for the life of me, I could not explain. I could have curated a program nearly equal in quality if all of the papers that were ultimately included had never been submitted. The takeaway here is as it is in most of academia: You can do everything “right” and still not make the cut. This is not a reflection of you or your work in any way. It reflects only the fact that we did not have enough spots to allot to all of the high-quality submissions.
2. SEA relies on the enthusiasm, talent, and generosity of junior scholars.
More than one-third of the proposals submitted listed a student as lead author (with many more students represented elsewhere on author teams). This is more than for any other group.* In addition, SEA is heavily attended by students. Students submitted some of the most interesting, rigorous work; many senior faculty commented to me during the conference that, given the quality of the presentations, they could not tell which presenters were students and which were faculty. We are doing something right if students can make such an impression on senior attendees, and I hope students continue their representation on the SEA board and continue making SEA an organization that meets their needs.
*The groups were as follows: students, postdocs, lecturers and teaching professors, assistant professors (including assistant research professors), associate professors, full professors, and institutional researchers (including those holding non-faculty research positions at colleges and universities).
3. The characteristics associated with inclusion in the program point to the role of institutional knowledge of academia and of SEA.
Months after the conference took place, I became curious about whether there were features of submissions associated with inclusion on the program (or with receiving a high composite score, which, in this case, was almost the same thing). I focused on inclusion in a paper session (operationalized as a binary indicator) and used logistic regression to identify whether specific characteristics predicted inclusion. Note that these are all simple (bivariate) regressions; a sketch of how they might look follows the two lists below.
Here’s what did not seem to matter:
- The number of authors listed on the paper
- When the proposal was submitted (relative to the deadline)
- Whether any authors on the author team had been a presenter at the previous year’s SEA
- Whether the first author was affiliated with a private university
- Whether the first author was affiliated with a top 50 or 100 college or university
- Whether the first author was in a sociology department, education department, or another department (e.g., human ecology or public policy)
And here is what did matter; précis with these characteristics were more likely to be included as paper presentations:
- Whether an SEA board member was on the author team
- Whether the first author held a research faculty appointment (in contrast to students, institutional researchers, and lecturers)
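As promised above, here is a minimal sketch of how these simple (bivariate) logistic regressions might look, using pandas and statsmodels. The file name and variable names are assumptions made for the sake of illustration, not the actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative: one row per submission, a binary (0/1) outcome for inclusion
# in a paper session, and binary predictors like those listed above.
# The file and column names are made up for this sketch.
df = pd.read_csv("submissions.csv")

predictors = [
    "board_member_on_team",
    "first_author_research_faculty",
    "first_author_private_university",
    "n_authors",
]

# One simple logistic regression per predictor, reported as odds ratios.
for x in predictors:
    res = smf.logit(f"in_paper_session ~ {x}", data=df).fit(disp=False)
    print(f"{x}: OR = {np.exp(res.params[x]):.2f}, p = {res.pvalues[x]:.3f}")
```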
My interpretation is not that covert bias underlay the selection of proposals: no one reviewed their own proposal, and the SEA board represents a diverse set of research interests and methodologies. The pattern does, however, seem indicative of homophily. Perhaps reviewers subtly favored work that resembles other work they have seen presented at SEA, or work that they believe reflects the interests of SEA. SEA board members also have special insight into the proposal writing and submission process; they know exactly the criteria on which proposals will be evaluated. The criteria are published on the SEA website, but board members likely knew them long before and have had successful submissions of their own on which to model new ones. Indeed, many members of the board are longtime SEA members, suggesting familiarity with the types of research often presented at SEA and deep familiarity with the proposal review and selection process.
The institutional-knowledge hypothesis is further supported by the positive association between inclusion on the program and being a research faculty member. There is, perhaps, a “know-how” that being a research faculty member, and an SEA board member in particular, affords submitters.
To help address gaps in institutional knowledge between board members and the SEA community more broadly, I first wrote a series of Conference FAQs and asked permission to share submissions that had successfully gotten on the program in the prior year (the 2023-24 conference year). These are publicly available on the SEA website thanks to their generous authors. To continue this commitment to transparency and further open the black box, I have put together a list of suggestions for SEA hopefuls to improve their submissions for future annual meetings.
How can I improve my chances of being selected?
1. Take the evaluation criteria seriously.
Simply put, you can take the published evaluation criteria at face value: they are exactly what reviewers score. I know this seems obvious, but submissions that clearly address, and even label, each criterion leave less room for reviewers’ interpretation. Don’t forget about fit with the conference theme; it’s easy to overlook if you’re recycling submissions from other years or conferences, but it is weighted just the same as the other criteria, such as methods.
Theoretical papers have been considered for, and accepted to present at, SEA, but admittedly, the lift is higher: There is a greater “burden of proof” for theoretical papers to show that you are not working with theory alone, but that you have a clear, logical, evidence-based rationale for your theory and that it carries real implications for practice. Some reviewers also simply do not know how to review theoretical submissions, even with guidance.
2. Use the allotted space.
I did not conduct any NLP analyses to substantiate this point, but I’d be willing to bet that submissions that used their full two pages (single-spaced) had better scores, on average, than submissions that were significantly shorter. If you don’t have that much to say, you probably haven’t sufficiently covered the evaluation criteria. In addition, the two submissions that received relatively high scores but that I excluded from the program ran only a page and a half or less. This is not to say that such submissions would automatically be excluded, but I recommend asking a peer to read your submission and point out what it might be missing.
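If a future chair wanted to put this hunch to the test, it would take only a few lines. Here is a sketch, with made-up file and column names, of how one might check whether précis length tracks composite score:

```python
import pandas as pd
from scipy.stats import pearsonr

# Illustrative: one row per submission, with a word count for each précis
# and its final composite score. The file and column names are assumptions.
df = pd.read_csv("submissions.csv")
r, p = pearsonr(df["word_count"], df["composite_score"])
print(f"Length-score correlation: r = {r:.2f} (p = {p:.3f})")
```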
3. Be specific.
The most common comment I noticed across reviewers’ feedback was a lack of specificity in the submission. For example, submissions might have included impressive data collection but vague or unspecified research questions; compelling findings but no theoretical framework; or a clear setup but vague methods, with limited detail on how variables were operationalized, how the sample was recruited, or how the data were coded. Reviewers often noted that specific implications or contributions were missing. If you’re pressed for space, identify sections to trim or consider whether tables or figures (which do not count toward the page limit) could carry some of the detail.
4. Have findings.
Submissions do not have to have findings. However, between two otherwise equally compelling submissions, the one with findings is more likely to be selected. The findings can be preliminary; you can hypothesize the findings you might uncover and their implications; your findings can change between the submission and the eventual presentation. But you should have something to say about the results of your analysis so that you can speak to the contribution you’re making to the field as well as to the significance of your work. Reviewers also evaluate précis with and without findings differently; some consider a detailed analysis plan sufficient, while others note that the absence of findings undermines the potential contribution of the work (since it is impossible to fully articulate a contribution before the results are in). On the bright side, if you don’t have findings in time for the upcoming conference, your study will probably make a compelling submission the next year.
5. Use spellcheck.
This is probably another suggestion that seems silly, but many authors do not check the spelling in their submissions. Run your précis through a spelling and grammar checker such as Grammarly. This applies to everyone, including native English speakers, full professors, and grammar connoisseurs. Your ideas may be groundbreaking and your study field-defining, but that does not matter if your précis is not legible to your audience. Typos, unclear sentences, and poor word choice undermine even the most compelling submissions. And that is not the only angle to consider for readability: I would also suggest avoiding esoteric language, for a similar reason. Your work must be understandable to people in other subfields to be included on the program.
This is not a complete list, of course. There are other things that will probably help. But that’s a start.
I wish you luck crafting your submissions for this year.
All my best,
Kaylee Matheny


