January 31st has come! We’ve gone through all the new submissions and returner pitches for She Wears the Midnight Crown and He Bears the Cape of Stars.
The authors who have previously worked with Duck Prints Press and who applied to be part of these anthologies have already been informed of our decisions. Because the pitches were relatively short, and there weren’t that many returner applications (28 applications for 16 slots – 8 slots per anthology), we tackled rating them first. All the returner pitches were phenomenal; choosing was really, really hard, but it still went far more quickly than going through the new applications.
Going through the 76 applications we received from people who haven’t written with us previously was a much more involved process, since all told the submissions amounted to approximately 150,000 words of fiction and story pitches to read. We’ve finished, and we can’t wait to contact everyone. However, before that, we wanted to put up a post explaining a little more about the process, to preemptively answer some of the questions we received last time after acceptance and rejection letters were sent out.
How were people rated?
Every story was read by three reviewers, who scored it using the rubric previously shared on our website (here). Each reviewer scored the authors on a scale from 0 (…no one was close to getting a 0) to 29 (…no one was close to a 29, either).
To ensure fairness, all scores were standardized with a simple statistical model. Basically: each reviewer used the rubric differently, and if we just compared “raw” scores, it would be unfair to people who got “harsher” reviewers (those who, on average, scored all their reviewed submissions lower) and over-weight people who got “more lenient” reviewers (those who, on average, scored all their reviewed submissions higher). To account for this, for each individual reviewer, we did the following:
1. Averaged all their rubric scores.
2. Calculated the standard deviation for all their rubric scores.
3. Ran the “standardize” function on each individual score.
What this does is take a raw score (say, a 10 or a 20) and re-calibrate it to a new standardized number where, for any given reviewer, their average maps to a score of 0. Their highest-rated submission gets a positive adjusted rating based on their standard deviation (most of ours cap out around 2, so the highest-rated fics have a standardized score around 2), and their lowest-rated gets a negative adjusted rating, also based on their standard deviation (most of ours bottom out around -2).
Doing this enables us to compare apples to apples, because now ALL the rubric ratings are scored as if the reviewer’s average was a 0, instead of us dealing with the problem where Reviewer A’s average rating was a 15, Reviewer B’s a 19, Reviewer C’s a 10, etc.
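If it helps to see those three steps spelled out, here’s a minimal sketch in Python of what a “standardize” function does (the scores are invented for illustration, and whether a given tool uses the “sample” or “population” flavor of standard deviation varies, but the shape of the calculation is the same):

```python
from statistics import mean, stdev

def standardize(scores):
    """Turn one reviewer's raw rubric scores into standardized scores.

    Afterward, the reviewer's average maps to 0, and every other score
    is measured in standard deviations above or below that average.
    """
    avg = mean(scores)  # step 1: the reviewer's average
    sd = stdev(scores)  # step 2: the reviewer's standard deviation
    return [(s - avg) / sd for s in scores]  # step 3: standardize each score

# Invented scores for one hypothetical reviewer:
raw = [10, 14, 15, 17, 20]
print([round(z, 2) for z in standardize(raw)])
# -> [-1.4, -0.32, -0.05, 0.49, 1.3]
```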
Okay, awesome, but why are you inundating us with math?
We share the math on the back end because, whether we accepted or rejected you, you are invited and encouraged to request your rubrics from us (though note that not all of us used it the same way, and a lot of us were, uh, fairly casual? in how we wrote our comments). When you get the rubrics, if you compare them with friends who applied, it’s inevitable that someone is gonna notice that it looks like people with higher or lower scores didn’t end up distributed quite where they’d expect (e.g., someone with a lower raw score notices they were accepted while someone else with a higher raw score was not).
The statistical model above is why this happens. We have two readers who tend to rate fairly high on average (one is me, I’m unforth, and if you request your rubrics, I’m Reader 1 for everyone; I don’t mind sharing that information). We have two readers who tend to rate fairly low on average. We have one who rates fairly middle of the road. So imagine Applicant A got both generous-with-points reviewers and the middle-of-the-road reviewer. Their rubrics are going to have pretty high point scores. Then, imagine Applicant B got the middle-of-the-road reviewer and the two stingy-with-points reviewers. Theirs is going to look like they did very poorly. But neither of those raw scores reflects reality. The person who got the highest point total on a “stingy reviewer” rubric might look like they did worse based on the raw scores alone, but when statistically adjusted, the highest score from a “stingy” reviewer is worth the same amount as the highest score from a “generous” reviewer! So the highest score from a stingy reader might be a 15, and the highest score from a generous reader might be a 25; the standardization looks at the average each of these reviewers gave across all their rubrics and lets us “recognize” that the 15 and the 25 are worth the same, so once the scores are standardized, both come out about equal.
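To make that concrete with invented numbers, here’s the same hypothetical standardize function applied to a “stingy” reviewer whose scores top out at 15 and a “generous” reviewer whose scores top out at 25:

```python
from statistics import mean, stdev

def standardize(scores):
    avg, sd = mean(scores), stdev(scores)
    return [(s - avg) / sd for s in scores]

# Invented numbers: a "stingy" reviewer and a "generous" reviewer.
stingy = [5, 8, 10, 11, 15]      # their best rubric score is a 15
generous = [15, 18, 20, 21, 25]  # their best rubric score is a 25

print(round(standardize(stingy)[-1], 2))    # -> 1.4
print(round(standardize(generous)[-1], 2))  # -> 1.4
# The 15 and the 25 come out identical once standardized.
```

(The generous reviewer’s scores here are just the stingy reviewer’s shifted up ten points, which is why the two top scores standardize to exactly the same value; real score sets won’t line up that neatly, but the principle holds.)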
Does that make sense?
I know it can be weird and confusing but trust me, it’s statistically sound. Or, don’t trust me – trust various statistical experts who say it’s the right way to handle this – for example, this one, or this one, or Wikipedia.
I’ll do my best to add standardized scores to the rubrics if you request them, so that any author can see both their raw score and the adjusted score we used for making our decisions. We are committed to transparency in our processes, so it’s important to us that people understand what we did, why we did it, why it was most fair, and how it impacted our selection.
How DID it impact your selection?
It’s pretty straightforward, really. Once scores were standardized, we averaged each applicant’s three standardized scores, then sorted the list from highest to lowest average. We accepted the people with the top ten average standardized scores for each anthology. Our final decisions were based entirely on the numbers. We think this is most fair. Note, though, that “most fair” doesn’t equal “most objective.” There’s absolutely still subjective opinion involved; if you’ve looked at the linked rubric, subjective opinion is in fact hard-wired into it: one of the ratings is “reader’s subjective reaction to the submission.” But we use this method to help keep things fair, balanced, and transparent, and we hope that it helps y’all understand that you didn’t submit into a black box that takes in applications and spits out acceptance and rejection letters; we are always prepared to share the nuts and bolts of what’s inside the application “box.” It’s a transparent box, not a black one. 😀
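In code terms, the whole selection boils down to something like this (applicant names and scores invented for the example):

```python
from statistics import mean

# Invented standardized scores: applicant -> [reviewer 1, reviewer 2, reviewer 3]
scores = {
    "Applicant A": [1.1, 0.8, 1.4],
    "Applicant B": [0.2, -0.5, 0.1],
    "Applicant C": [1.3, 0.9, 1.2],
}

# Sort applicants by their average standardized score, highest first...
ranked = sorted(scores, key=lambda name: mean(scores[name]), reverse=True)
# ...and accept the top ten for each anthology.
accepted = ranked[:10]

print(ranked)  # -> ['Applicant C', 'Applicant A', 'Applicant B']
```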
Cool, got it. How will people be contacted?
As soon as this post is done and cross-posted, I’ll be sending out acceptance and rejection letters by e-mail from the duckprintspress at gmail dot com account.
1. Acceptance letters! We’ve selected 20 authors (ten per anthology) whose work really wowed us, and who received the highest average statistically standardized score on their rubrics.
2. Rejection letters! It’s a sad reality that we simply cannot accept everyone. We got almost 80 applications for 20 spots, so only about 1 in 4 people could actually “make the cut.” Competition was fierce, and every single reviewer can point at a personal “fave” that didn’t end up making it. For both anthologies, the difference between 10th and 11th place was only a few hundredths of a point. We saw a lot that really, really impressed us, and (as you’ll see in your letters) we strongly encourage everyone to continue honing their skills and consider reapplying in the future.
Note that we’ve also decided to invite about a quarter of the people we rejected to our Discord server. These invites are issued based on a number of factors, and are entirely subjective – basically, once we’d gone through and knew who’d been accepted, we looked at who didn’t make it and used our editorial judgement to determine who we felt should be brought in. We’re sorry we can’t invite everyone, but…we can’t. We share that we’re inviting some, but not all, because, again – transparency.
I have a question that wasn’t addressed in this post, or I don’t understand something you said, or I want more information about point x, or…
Drop us an ask, DM us, leave a comment or e-mail us at duckprintspress at gmail dot com! We’ll do our best to explain.
*
Thank you all for applying. Reading your submissions was a delight. There was so much here that just blew our socks off, and we can’t wait to get to know folks better, whether they were accepted or not, invited to Discord or not.
Always remember that, at our core, Duck Prints Press is committed to the principle that we want to work with people who want to work with us. So, even if you didn’t make it this time – keep at it, apply again, and we would love to be able to invite you next time!!