Thanks for supporting the community, Kurly.
Marti_Bros, welcome to the forums. It goes without saying that if you receive a notice that sounds objectively contradictory to the situation it’s supposed to describe, it’s always possible there was a mistake. Submission error margins are actually quite low, but we do see thousands of submissions, so mistakes happen once in a while!
Have you sent a support ticket yet? If not, please feel free to do so at the Help Center (https://help.market.envato.com/) and make sure you link back to this post.
Confusion can happen sometimes, it’s true. Sometimes an author unknowingly includes different content than they meant to send, and sometimes an error happens in processing.
We’ll have a look and figure it out either way for you.
Hi again everyone,
Glad this topic had a positive impact and resonated with so many of you. All the feedback is very much appreciated, and we just wanted to answer a few of the standout questions and comments here. Hopefully this clears things up further.
Hello, I don’t know if this has been covered before, but I’m wondering: do the reviewers work independently in their own studios (using their own monitoring setups), or is there a central office where they all work?
Reviewers are spread out all over the globe, but they also work together in real time.
But given how successful Envato has been I would love to see you add reviewers so borderline rejects would always be reviewed by a second person without that second person knowing it was already rejected. And it does not have to impact the queue if you simply absorb the additional cost of additional reviewers out of your ample profits!
Also, add a tiny bit of time on each reject to tell the author why the item is being rejected. Again, this will not slow down the queue if you simply add reviewers and absorb the additional cost out of profits! If a reviewer goes through a process to reject, why can they not have a simple but complete form to check off the criteria for the author? Yes, I know it is not your job to teach, blah, blah. But these authors provide the means for Envato to succeed! They deserve the respect of knowing why their submission was rejected, in my opinion!

Obviously you do not have to do either of these things because you are successful enough as it is. But these things would not cost a lot and would greatly improve the experience for your suppliers and the community in general!
That is asking the company to make a very substantial investment of time and resources – a lot more than you might think. Keep in mind that this used to be the approach. Even though Hard Rejections have no “intrinsic” value, as they do not exist as items in the library and cannot be sold, they used to get a lot of very specific feedback in an attempt to steer the rejected tracks into something more salvageable.

However, over the years, careful analysis has shown that the vast majority of hard rejected items, even when they receive substantial and detailed feedback, are exceedingly unlikely to be reworked into an acceptance. This isn’t anecdotal speculation; it’s what actually happened over several years. That is why today we have to reserve feedback for soft rejections only.
Maybe the best thing to do, Adrien, is implement a new rule…any threads that are clearly a complaint against a hard rejection will be deleted or locked…just like self-promotion or spam.
Well, at the outset, we don’t really want to muzzle the community’s freedom to discuss item rejection topics, so long as ideas are not brought forth in a discourteous or unmannerly way. We also appreciate being able to monitor the discussions, as it helps us measure the quality of the work we’re doing. It would be a bit heavy-handed to formally forbid discussion of rejections, as MusicBox suggested, because of the values Envato strives to hold at its core. Whether or not rejection chatter affects the impression people get of authors questioning or contesting their submission results, it’s probably something that can remain in the item discussion for now.

But yeah, if posts do get out of hand, they’re certainly going to be addressed, case by case or otherwise.
Great info, it’s nice to know about the borderline things. So I think it’s still a matter of luck if we’ve got a middle-quality track?
Mostly no, but a little bit of yes. That’s inherently where the potential inconsistency stems from. As Andy mentioned, reviewers are not rolling dice. There is a concerted effort to minimize subjective decisions in the review process. But if a track is very, very borderline, i.e. really right in that theoretical middle area, then it can potentially be either an acceptance or a rejection, depending on various factors. If it gets rejected, the types and number of issues that make it borderline will determine whether it is a soft or a hard rejection.
Why are tracks in the grey or borderline “hard rejected” when a soft rejection sounds like it could be more appropriate? If a track could go either way, wouldn’t it be more appropriate to soft reject with a little detail to improve the track?
As above, it depends on the type and number of issues. If a borderline track has numerous smaller or subtle but important issues, too diverse to explain individually or requiring considerable detail to convey separately, it will more likely be hard rejected. On the other hand, a borderline track with a single major issue can usually be sorted out reliably with a single directive, so it is more likely to be soft rejected.
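Purely for illustration, here’s a minimal sketch of that decision logic in Python. The function, issue labels, and the one-vs-many cutoff are hypothetical stand-ins of our own, not the actual review tooling:

```python
# A minimal, hypothetical sketch of the heuristic described above.
# The function name, issue labels, and cutoff are illustrative only.

def classify_borderline(issues: list[str]) -> str:
    """Decide the likely outcome for a borderline track, given short
    labels for the problems found in review."""
    if not issues:
        return "accept"
    if len(issues) == 1:
        # A single major but fixable issue can be conveyed as one
        # directive, so a soft rejection is more likely.
        return "soft reject: " + issues[0]
    # Numerous smaller or diverse issues are impractical to itemize,
    # so a hard rejection is more likely.
    return "hard reject"

print(classify_borderline(["harsh cymbals in the mix"]))   # soft reject: ...
print(classify_borderline(["timing drift", "thin master",
                           "weak arrangement"]))           # hard reject
```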
When are we going to talk about the fact that non-exclusive authors have no chance of getting trending or featured? That bothers me a lot more than a hard reject from time to time..
That’s outside the scope of this post, but it’s the company’s prerogative to preferentially feature content that is not found in other competing libraries or licensable elsewhere online.
So many words. There must also be a result, I hope.
We all hope so. The truth is, it also depends on you.
Yes, but will an item price increase follow? Good quality costs a lot for both this marketplace (more rejected items) and the authors (a higher chance of rejected items). So, I think an item price increase is justified.
That’s a notion that may have conceptual merit, but it’s outside the capacity of Quality & Review Teams to comment or speculate on the matter, at this point.
My tracks are not borderline, they are histrionic. SCNR
You win, Lemega, you win.
The subject of Review Consistency regularly comes up on the boards, so we felt the time was right to address the matter, for what it’s worth.

Based on the astute observations of some of our long-time members (ahem.. hi, Kurly), we can either deny the existence of inconsistency outright and stonewall, or treat it as a reality that needs to be managed in an ongoing way.
But we’ll say it again, we care about what you think, so let’s begin by admitting the truth.
At the end of the day, it would be silly to steadfastly proclaim that inconsistency never exists here.
That would be simply untrue.
Absolute consistency is an abstract concept that applies only in theory, not in practice, so long as people are making decisions in the complex world of appraising the objective commercial potential of art – a subjective medium by definition.
So there. We can say this very, very clearly: inconsistency unquestionably exists in some areas of the review process, and always will. In fact, believe it or not, it’s something that needs to be managed on a regular, near-daily basis. It’s not something we hide from internally, as all reviewers know. But do we owe it to you to tell you a little more about it? That’s a debatable point for a business, but we’d like to think it might help anyone who’s frustrated understand what happens and why, even if it won’t change outcomes overnight.

At this point, the most important question to answer in managing inconsistency is: how does it manifest itself, and why?
Let’s dig a little deeper to get to the core of the issue, to see what we can learn.
I. Team Inconsistency
First off, we currently have a team of 11 awesome individuals listening to submissions. The people reviewing your tracks are ALL well-meaning and well-intentioned, driven by a sense of responsibility toward the quality of the library and an honourable respect toward the community of authors. If anyone has any doubts about this, or feels there is any conflict of interest or personal vendetta, the fact of the matter is that you simply don’t know each person on our team well enough.

Please keep in mind that we do have a system that aims to minimize inconsistency on a daily basis. However, we will never be able to eradicate it outright, so long as people, and especially groups of people, are making the decisions.
What we can tell you for starters is that inconsistency is practically non-existent in the extremes. Even with 11 people. That means the very best and the very worst submissions. In the absence of strong misrepresentation indicators, stellar submissions are practically always accepted, and categorically inadequate ones are rejected. We can agree on this, yes?
Now everyone can hear how great some of the best tracks we receive sound, but unfortunately the very worst of the lot never see the light of day. And we get a lot of both. We’d really love to give you examples of the “best of the worst” we hear, for your elucidation if anything, but for privacy reasons we simply can’t. We have a feeling most of you wouldn’t believe your ears.
If you can picture as an example, a track that sounds like a locomotive having intimate relations with a chainsaw while a music box sample plays in the background in a completely different key, with birds singing here and there, and random atonal electronic vocal accents… Well, you get the picture. Very creative perhaps, but AudioJungle is not the right library for that!
Forgive the digression.. Back to the point.
The biggest incidence of inconsistency happens with borderline tracks that are submitted to us.
Wait.. What? What is a borderline track??
We thought you’d ask. To answer that, first we need to understand what “borderline” means, as far as we’re concerned.
Here’s the thing – “Borderline” is not a line at all. It is a Spectrum.

In this sense, a borderline track is one that falls somewhere near the middle of that spectrum. It does not stand out as flawlessly executed or commercially compelling, for any number of composition, arrangement, or production reasons, but its flaws are not severe enough to make it sound immediately and clearly inadequate for general commercial purposes.

Essentially, that’s where we’ll experience the most inconsistency when we review: in that middle third, the gray zone. And the closer to the middle gray a track falls, the more likely it is to be inconsistently reviewed.
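If it helps to picture it, here’s a toy model of the spectrum. The 1–9 grade and the zone boundaries are hypothetical stand-ins of our own; the review system doesn’t actually compute anything like this:

```python
# An illustrative toy mapping of the spectrum idea above.
# Grades and zone boundaries are hypothetical, not real tooling.

def zone(grade: float) -> str:
    """Place a rough 1-9 quality grade into one of three zones."""
    if grade <= 3:
        return "clearly inadequate"    # near-certain rejection
    if grade >= 7:
        return "clearly stellar"       # near-certain acceptance
    return "gray zone (borderline)"    # where inconsistency lives

for g in (2, 5, 8):
    print(g, "->", zone(g))
```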
But how does that happen? And why?

If a track is very borderline, the bottom line is that it can theoretically go both ways, and that may depend on a number of factors..

For a human reviewer, previously heard content can variably influence a borderline decision, regardless of training and experience. That is simply human habituation at work.

That’s what can make a “3” or “4” on the grayscale feel like a “5” or “6”, or the other way around.

Imagine you heard several great tracks in a row right before hearing a more average one.. the borderline track WILL sound worse to your ears. The middle of the spectrum can shift away from you.

Yet if you heard several entirely unacceptable submissions immediately beforehand, the borderline track may get approved. The shift can also happen in the other direction.
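To make that contrast effect concrete, here’s a deliberately crude toy model (again, not real review software; the recency weight and threshold are invented for illustration):

```python
# A toy model of the habituation/contrast effect described above:
# the perceived grade of a track drifts away from the average of
# recently heard tracks, so a dead-center "5" can land either way.

RECENCY_WEIGHT = 0.3   # hypothetical strength of the effect
THRESHOLD = 5.0        # the theoretical middle of a 1-9 grayscale

def perceived_grade(true_grade: float, recent: list[float]) -> float:
    """Shift a track's perceived grade away from the recent average."""
    if not recent:
        return true_grade
    anchor = sum(recent) / len(recent)
    # A run of great tracks (high anchor) makes an average track feel
    # worse, and vice versa -- hence the subtraction.
    return true_grade - RECENCY_WEIGHT * (anchor - THRESHOLD)

track = 5.0  # a perfectly borderline track
print(perceived_grade(track, [8, 9, 8]) >= THRESHOLD)  # False: feels like a 4
print(perceived_grade(track, [1, 2, 2]) >= THRESHOLD)  # True: feels like a 6
```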
Throw a bunch of different people into the mix, at different points in their shifts, and you get an idea of how things can get hazy. And yet there is strong training to remain focused and disciplined. In many cases a second reviewer may choose to defer to the first one, but obviously this is not always possible.
Now, if a reviewer accepts a track that could arguably be rejected (barring system errors), it is significantly more often than not a borderline track that may or may not add value to the library.
Vice versa, if a reviewer Hard Rejects a track that could arguably be accepted, well, it boils down to the same thing.
Ok, so how come my track was accepted when I resubmitted with NO changes??

Before saying anything else on this, we will just clarify: if you resubmit a Hard Rejected track without making any changes, (1) you are breaking the rules, and (2) you are taking a risk that grows each time you repeat this.

Let’s assume you throw caution to the wind. If that rejected track is resubmitted with no changes, several things can happen (a rough sketch of these branches follows the list).

1 – Most likely, the track will simply be rejected again, if it was on the lesser side of borderline, objectively and subjectively.
2 – If it is matched by the system as a resubmission with no changes, it may or may not be reviewed again by another reviewer. The account may get flagged in our database either way.
3 – Less likely, statistically, if the item is determined to be acceptable on the second pass, it may get accepted. In this case the first reviewer may get a notice, and a consistency advisement may be dispatched to the team for ongoing training. The account may still be flagged for resubmitting hard rejected content.

4 – Also less likely, if the track is accepted the second time but the second reviewer accepted it incorrectly, the second reviewer may get the notice, and a consistency advisement may also be dispatched to the team for training purposes.

5 – Believe it or not, we also have instances where a borderline track that should be rejected gets accepted on the First pass, and it only gets noticed when an update is submitted, or otherwise. While this exemplifies and publicizes the notion of inconsistency when held under scrutiny, and we know people are watching, we won’t usually overturn those unless there was a legitimate processing error, even if that creates the impression of more inconsistency! But the reviewer and team will still receive private advisories and notes for training.
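As promised above, here’s a rough sketch of those branches. The function, flag names, and inputs are invented for illustration; this is not the actual Envato review system:

```python
# A hypothetical sketch of the resubmission branches listed above.
# Names and logic are illustrative only, not real review tooling.

def resubmission_outcome(detected_as_duplicate: bool,
                         second_verdict: str,
                         first_rejection_was_correct: bool) -> list[str]:
    """Return the likely consequences of resubmitting an unchanged
    hard-rejected track."""
    events = []
    if detected_as_duplicate:
        events.append("account flagged for resubmitting rejected content")
    if second_verdict == "reject":
        events.append("rejected again (the most likely outcome)")
    elif first_rejection_was_correct:
        # The second reviewer accepted incorrectly: they get the notice.
        events.append("accepted; advisory sent to the second reviewer")
    else:
        # The first rejection was the borderline miss: the first
        # reviewer gets the notice, the team gets an advisement.
        events.append("accepted; advisory sent to the first reviewer")
    return events

print(resubmission_outcome(True, "reject", True))
```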
It really needs to be understood how much awareness there is on the matter, and there are always ongoing efforts to counter significant imbalance in consistency.
To say that many rejected tracks are accepted when they are resubmitted unchanged is a relative statement. The number is obviously not zero, but as a percentage it is a very small proportion. That much we can tell you without a doubt. So while the statement is not wholly untrue, it concerns a statistically minor fraction, albeit one that can attract visibility, as we readily admit.

Ultimately, resubmitted Hard Rejections will usually be hard rejected again when noticed, and accounts flagged. If an emergent pattern of the practice is verified, whereby it becomes evident that there is a habit of resubmitting hard rejected content without any changes or reasonable communication with the team, the account’s upload rights will be revoked, pending further communication via support channels. We don’t really have much of a choice when comms are repeatedly ignored.

Bottom line: do the referees see each and every foul that happens on the football field? You know the answer to this.
But when a pattern of infringement is verified, well, we have to have yellow and red cards too. That’s just the way it is. So please govern yourselves accordingly.
The regrettable part, in all honesty, is that resubmitting rejected tracks with no changes, accepted or not, does nothing to improve one’s skill at audio production. It just demonstrates a certain short-sightedness, where being accepted is valued more highly than producing content that could actually sell. More often than not, it devalues both the library and the portfolio: borderline content is less likely to get sales AND makes the rest of the portfolio less likely to be browsed further. Please think about that, people.
And that’s why we have to draw the line sometimes.
II. Inconsistency over time
A separate issue, still worth mentioning while we’re on the topic, is the appearance of inconsistency in the approval standard as the library has evolved over the years.
We won’t delve as deeply, but the truth of the matter is “Acceptance standards have continued to evolve over the months and years, so something that may have been previously accepted in the past would not be accepted today.” Yep, that’s coming straight from the textbook.
This is why it is incorrect to make assumptions or comparisons based on items that were accepted 1–2 or more years ago when considering submitting content today. Doing so could certainly produce what looks like strong evidence of inconsistency, but the comparison does not reasonably apply today. Apples with apples, oranges with oranges, please.

Is there a plan for this subset of the issue in the future? Yes, but this post is not the place to expound on that. Something to discuss later this year, perhaps.
How can inconsistency be reduced?
Finally, winding to a conclusion, we can share a few last important points. We already have consistency measures in place. They can never be perfect, but the fact that they exist validates the internal awareness of the issue.

In the queue, many of the items that we place on hold are ones that go through a second review without you realizing it. However, for efficiency reasons, it’s simply not possible to assign two or more people to each submission received. We also cannot create multiple filtering systems now without Hugely impacting the length of the queue. And then we’d be lambasted for that, right?

But we do encourage our team to honestly seek another ear whenever doubts are felt. This minimizes inconsistency in more ways than is immediately apparent.

We also scan the rejection forum threads with the Community Team, and take appropriate action when warranted. Envato does believe in the value of a “Fair Go”, as much as possible, after all.
But sometimes, to reduce inconsistency in the future, what’s best called for from us as authors is some really brutal self-honesty, asking ourselves: “Does my work stack up? Am I closer to the borderline than I think I am?”
This is a question for all authors, and also for reviewers.
Anyone can always look for ways their work might fall short, or where improvements are possible, but unfortunately, as individuals, we are all potentially prone to being hindered, without even knowing it, by what’s known as the Dunning-Kruger effect. It’s not a plugin. It’s an inescapable quandary of consciousness that can creep up on individuals at all levels. But through persistent trials and tribulations, many believe it’s something anybody can surmount if there’s an open and honest will to look inward. And when each next level is reached, a veil is lifted and we can often see what we simply could not perceive before. Because we are all sometimes blind to ourselves in the present.

If anyone can sincerely and introspectively admit to themselves that their submissions may be classified as borderline work, and they wish to minimize their experience of inconsistency, they can either make the effort to improve their craft with time, patience, and persistent dedication, growing as artists, or accept that they will inevitably be subject to the inherent reality of inadvertent inconsistency in the review of borderline submissions.
The review team is striving to do the same too, every day.
In any field, the other regrettable option is to remain annoyed or embittered and brazenly point fingers at the world for being what it is. But that alone changes nothing, and it never will. It’s like cursing the ocean for making you wet each time you swim in it.
As the old saying goes, “if you’re not part of the solution, you’re part of the problem”.
While we cannot explain to all authors every single dynamic of the backend review system and expect them to suggest realistically deployable methodologies, part of the solution can start with oneself – the only part we can control.

By making every effort and using every avenue available to learn and get a bit better each day, the measure of inconsistency will narrow as the two groups, author and reviewer, take steps toward one another. But each group can only reach a midway point, so the actual distance between the two parties depends on the steps both take.

That leads to progress beyond the realm of the borderline, and That, regardless of any undeniable inconsistency or imperfection in the process, will undoubtedly make the results more consistent, because rejections will gradually occur less and less.
You can take it or you can leave it, but that’s how it happens here.
Have a great Sunday, forum friends, thanks for taking the time.
Thx Adrien for the reply. We have seen great improvements on AJ these days that we never thought would happen. So you can lock the thread, I think.
Yes, there have been some very encouraging developments lately, haven’t there?
That said, I don’t think we should lock this thread, because it’s important to keep visibility on a distinct improvement people have wanted so much, for so long.
Even if it takes weeks or months more to ascertain if this is even practically possible, each +1 validates the sentiment that it’s something that may have real intrinsic value for everyone, authors, buyers and Envato.
So we can leave the thread open, to keep as a testament for all the community and staff, as we keep going forward this year.
And you can all keep +1’ing if you like.
10 pages without a reply from Staff, still going after 10 months?? Something must be the matter indeed, on the 10th of the month today.
Guys, it’s very clear that so many people want this.
In fact, some Staff have been considering the logistics of initiating something along these lines for a little while already. With so many important things happening at once across the whole company, not just AJ, we hope it’s understandable that we’d rather not raise false hopes, realistically.

Admittedly, what takes the most time is not so much tracking the talent, but instantiating a different, more flexible delivery foundation for it, and all the details around that, while everything else needs to get done too.

So.. the situation at this point regarding “Reviewer’s Hot Picks” is that promises can’t be made as to when or how, or ever. But yes – it would be nice to be able to have more revolving features for separate categories.
I’ve got a question. What if there’s a music track in the queue somewhere in between, let’s say, 50 sound effects? Will it stay in the queue? Because it’s nice to have front page exposure for music tracks.
Hi Lokohighman, Music tracks are subject to an entirely different process. This is only for sound items, so music items in between sounds would be processed independently in such cases.
Hi Dreamkid, quick note here, thanks for posting. We see your ticket is assigned so you’ll get a reply shortly. Market queues are definitely independent. If you didn’t get a notice from the VH Team then it sounds like something unusual happened, especially if you were resubmitting soft rejections.
We’ll make sure the weekend team gets your reply expedited.
The Quality Team has just reviewed the rejections on your internal record, and while most were verified as not being eligible for acceptance, you are correct that there was an issue with processing.
It’s apparent that there is a case here to overturn at least one of the rejections you’ve received.
Please open a support ticket at firstname.lastname@example.org and make sure you link to this forum thread and we’ll be able to review the case further for you. We’ll also communicate this finding back to the review team, you can rest assured of that.
Thanks for speaking up here, the feedback is appreciated.