lutch said: Are you talking about specific quality tools, or is there a plugin called quality tools? If you’re talking about certain quality tools, are there any examples?
Use quality tools and everything will be fine. Good luck to you.
It’s a language translation thing.
“Tools” means “instruments” in Russian.
Possibly two things have happened: either they hired an overzealous reviewer (or two) who rejects 99% of tracks on their shifts, or the review team is being pushed to make snap decisions.
Nope, neither of those things happened.
Regarding the figures, we can only take your proposed estimate as an intended exaggeration; otherwise, we can assure you that if any reviewer were anomalously rejecting literally 99% of the submissions they process, or even close to that, it would be noticed quickly in the stats charts and genuinely addressed by the company. Submission, acceptance, and rejection percentages are closely monitored per reviewer, just so you know.
When I took a hiatus a few months back, the review time was around 7 days; now it’s 16–24 hours. Obviously something has changed in the review process.
Does anyone recall the general sentiment when the queues were at 7+ days? If you do, please raise your right hand.
Yes, there have been evident changes in review, that perception is spot on. Specifically, the number one author demand over the past year has been queue wait times, which have been prioritized as the team expanded to meet capacity, and not at the expense of quality.
So now we’re at around a day’s wait for the majority (something many thought we would never achieve), and the error-margin monitoring has stayed essentially flat. And we still respond to case-by-case rejection inquiries via support in much the same way.
I hope they address this issue soon, and/or lay out a DETAILED description of what they expect production-wise.
That was done 3 months ago, in this thread, which clarified a lot and was well received. It should probably be stickied again, I think.
If the issue is that the review team is being pushed too hard, I’d prefer they go back to a week or more to review a track so they can take their time, rather than having all these great tracks flushed down the toilet.
Do people really want to wait a week again? That’s certainly not the general feedback we’ve gotten so far. Adam, I have to say that your opinion is respectable in essence, but it’s been demonstrated that applying the brakes and bringing queues back to 7 days would not solve the perceived inconsistency issue being described here.
Or possibly AudioJungle is reaching a critical mass where they feel like they already have more than enough music or contributors, like some of the other stock music sites have. I guess all we can do is speculate until someone from Envato actually decides to weigh in on this issue. Maybe we are all being paranoid, but I don’t think so; I think you can definitely feel this change, whatever it is.
The notion of reviewers rejecting for competitive reasons is, well, simply erroneous and inaccurate. It is silly and borders on conspiracy theory. Not all reviewers are authors, and of those who have active author accounts, the majority aren’t actively authoring.
Regardless, every single person on the team has been hired based on merit and especially trustworthiness and integrity. The entire team works together, with shared reviews on numerous item cases before they get rejected.
One thing is for darned sure: every single reviewer hates and regrets making a slip. But we admit our slips internally to the whole team, and everyone is made aware of them when they happen. I can tell you personally that each reviewer really takes it to heart to avoid making outright review errors.
As for providing feedback on every single rejection report on the forum, it’s not possible to respond to each one. That is not a pattern we are ever going to establish.
The community discussion and dialogue is always welcome, in any event.
It may seem to you now that “inexplicable” rejections are on the rise, but that may only be an imprecise impression emerging because more people are taking advantage of the forum for item discussion. Our recent initiative for more transparency and public rejection responses, when warranted, could easily have led to the assumption that we were meant to give feedback on every thread.
Unfortunately that’s not possible, and was disclosed as such. Yet people seem to be taking the chance to post rejections more. That in itself is a good thing, because it stimulates the discussion and allows us to monitor the pulse of the review process from both within and without.
Questioned rejections on the forum, however, are evaluated by both quality management and review staff, and when definitive discrepancies are noted that indicate an error, then the case can certainly be revisited.
That said, the initial item in this thread is not one that meets current acceptance. We know, due to the feedback we got, that a lot of musicians here can perfectly understand why.
Despite this, we’re still looking at ways the system’s Hard Rejection messages can be better conveyed. One worthy suggestion, from Lemega, was to include information inviting item discussion.
Ultimately, we’re never going to claim to be perfect, which is why we’re interested in hearing the feedback; when we can see that a call falls on the wrong side of the fence, we have no problem admitting it and overturning a rejection.
Case in point – Marb… The bar has been raised, yes indeed. And it was raised to much acclaim and satisfaction by a vocal majority.
With that out of the way, now that the echo of your bellow has rung out, we can candidly advise you that your last rejection was noted here and is in fact one that was admissible. Check your inbox shortly.
Please note, everyone, a single overturned rejection does not a million mistakes make. That would simply be misconceived conjecture. Ultimately, it’s just a matter of trust that the reviewers, many of whom have been empathetic supporters of yours on the boards here, who are also former author colleagues of yours, have it in their minds and in their hearts to do the best job possible as a team. If it’s corruption you think you’ll find, you are 100% looking in the wrong place.
For the record, the team averages are pretty similar in terms of rejection numbers, and the fact is that rejections have actually gone down in recent times.
And guess what: in the very same way that exceptions happen when an item gets rejected that should not be, there are also exceptions when an item gets accepted that arguably should not have been.
This can lead an observer to conclude that consistency is completely off, but in fact it’s not, because the exceptions are, objectively, statistically less significant than perceived.
Because we acknowledge this fact, one of our planned initiatives is to gradually revisit the library’s content, in due time, according to a very specific set of parameters, with the intent of aligning the layers of consistency we’ve had to work with over the years. And that means all of our older portfolios, including all staff’s.
Those past tracks which do not meet current acceptance standards may be revisited and withdrawn from our library in the future.
And no, before anyone gets alarmed or goes round the bend here, that does not mean there’s going to be a free-for-all retroactive rejection campaign.
It simply means that older content which clearly and categorically cannot justifiably remain active by more recent standards, objectively, may get hidden (with a possible option for update), to narrow the gap in consistency that’s accumulated over the years, and further align items that were processed up until today.
Phew. Hope that makes some sense and resonates well enough with folks. If you’ve managed to read through this whole post, thanks for taking the time! Otherwise, let’s get back to work, and we’ll keep you posted with further updates as they evolve.
Peace out everyone.
Hey guys, it wouldn’t be AudioJungle if it didn’t look like a jungle sometimes, would it!
It appears this confusion has been compounded by a couple of errors on our part.
For starters, while the KB terms briefly touch on the notion of subjective titles, that article was written a long time ago and is meant to be interpreted as a general guideline for extreme cases, for all marketplaces. However, due to the nature of distinct media and library indexing practices, each marketplace has evolved to approach the policy slightly differently, and this specific point does not come across in the article’s present form.
Focusing on AJ, to be clear, we interpret the subjectivity of a title in a way that discourages superlative subjective titles.
This means that titles such as “Best Track of 2014”, “The Most Amazing Music Ever”, or “Finest Production in the World”, as examples, should not be accepted by the review team, and alternatives should be requested, suggested, or temporarily set by reviewers for such cases.
However, to be clear, we are not imposing a ban on any specific words per se (well, except words you don’t tell your mother).
So ultimately, as Phil noted, we can consider the following titles as acceptable, using the terms above:
1. Best Regards, Best Friends Forever, Best Success. OK.
2. Amazing Inspiration, Amazing Days, and yes.. Amazing Grace. OK.
3. Finest Diamonds, Beautiful Worlds, Inspiring Ideas, Ultimate Fight, Epic Glory… Etc etc etc.. OK.
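The distinction in the examples above can be sketched as a rough heuristic: a title is flagged only when a superlative makes a claim about the item itself (“of 2014”, “ever”, “in the world”), while ordinary phrases using the same words pass. This pattern list is an illustrative assumption, not the review team’s actual logic.

```python
import re

# Hypothetical heuristic: superlative followed by a self-praising claim.
SELF_PRAISE = re.compile(
    r"\b(best|most|finest|greatest)\b.*\b(ever|of \d{4}|in the world)\b",
    re.IGNORECASE,
)

def title_flagged(title: str) -> bool:
    """Return True if the title reads as a superlative claim about the item."""
    return bool(SELF_PRAISE.search(title))

print(title_flagged("Best Track of 2014"))         # True: superlative claim
print(title_flagged("The Most Amazing Music Ever"))  # True
print(title_flagged("Best Friends Forever"))       # False: ordinary phrase
print(title_flagged("Amazing Grace"))              # False
```

Note how “Best Friends Forever” passes even though it contains “best”: the rule targets self-praising claims, not individual words.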
In this sense, admittedly, the rule was followed too strictly by the review team when we asked you to change “Amazing Technology”, Lumen. That title is considered acceptable, and we can change it back for you if you want. No harm done; we can relax.
In conclusion now, what are the lessons here, and next steps?
1. We are going to reclarify this specific aspect of titling with the whole review team immediately.
2. We are already revisiting the KnowledgeBase content in this capacity so it gets clarified in the next update deploy. (Currently a project in the works, slated for the coming weeks.)
3. We are going to align our community and review teams further on the matter to ensure the details of each marketplace are taken into account before any subsequent announcements are made.
4. We welcome any further reports of exceptions or cases where titles or reviews are genuinely incompatible with the information stated above, and we can address concerns on a case-by-case basis via support.
Our goal is to broaden and reinforce consistency here, where there’s potential for pitfalls, so please send any questions on this topic our way if you are unsure about a title of yours.
Also, you are always free to add comments in the Notes to Reviewer field when you submit an item or update to the queue, if you need clarification.
Thanks everyone for your attention and understanding here, and bona fide apologies for the hiccups.
We’ll edit the item links out and unlock the thread now to allow the conversation to continue – as long as it stays on topic and doesn’t descend into proverbial anarchy, s’il vous plaît.
It’s true that if there is a trail of perceivable silence (at normal volumes), or the very tail end of a note ringing out that barely crosses the 2:00 mark, a submission will get priced as 1:01–2:00. The same goes for other price points.
That said, in your case, StarDiva, the main version is clearly sounding after 2:00 still, so it appears a system error occurred. It’s been fixed now.
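The tiering rule above amounts to bucketing by audible length, with trailing silence excluded. A minimal sketch, assuming only the 1:01–2:00 boundary named in the post (the other tier names and boundaries here are hypothetical placeholders):

```python
def price_tier(audible_seconds: float) -> str:
    """Return the pricing tier for a track based on its audible length
    (trailing silence at normal volumes is excluded before measuring)."""
    if audible_seconds <= 60:
        return "0:00-1:00"   # hypothetical lower tier
    elif audible_seconds <= 120:
        return "1:01-2:00"   # tier named in the post
    else:
        return "2:01+"       # hypothetical upper tier

# A note tail that barely fails to cross the 2:00 mark stays in the lower tier:
print(price_tier(119.5))  # 1:01-2:00
# Audio clearly still sounding past 2:00 moves to the next tier:
print(price_tier(125.0))  # 2:01+
```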
Otherwise, please note, importantly, that in order for your update notes to ever be seen by anyone on the review team, you need to actually attach a file, or make a small tag or title edit. Otherwise if you only enter notes, as you’ve done now, the update is processed automatically by the system, and will not be seen by any staff in the queue.
Thanks for your attention here. Let us know if you have any questions.
It’s ultimately not critical to change the filenames in the zip. If you wish to do so, however, you’ll need to submit updated zips via each item’s edit page. Otherwise, you can just edit the titles in the title field on the update page without attaching zips and it won’t affect the processing.
Bottom line, a track cannot be in more than one pack at any given time.
So technically you have two options:
1. You are free to delete the pack if you wish to reorganize those tracks into other packs. Though you lose the sales history, and deleted packs won’t be reinstated.
2. You may also update an existing pack to remove a track if you wish to include that track in another pack. The updated pack will have its price adjusted, and that step must be completed before the new pack is submitted or updated.
NOTE: What you are not allowed to do, however, is take advantage of the update system to continuously swap new tracks into an existing pack and promote that as regular content updates to buyers. The system logs these updates too, and that can lead to account disruptions, naturally.
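The constraint and the required order of operations above can be sketched as follows. The `PackLibrary` class and its method names are assumptions for illustration, not AudioJungle’s actual system:

```python
class PackLibrary:
    """Toy model of the rule: a track belongs to at most one pack."""

    def __init__(self) -> None:
        self.packs: dict[str, set[str]] = {}

    def add_track(self, pack: str, track: str) -> None:
        # Enforce: a track cannot be in more than one pack at a time.
        for name, tracks in self.packs.items():
            if track in tracks and name != pack:
                raise ValueError(
                    f"{track!r} is already in pack {name!r}; "
                    "update that pack to remove it first."
                )
        self.packs.setdefault(pack, set()).add(track)

    def remove_track(self, pack: str, track: str) -> None:
        self.packs[pack].discard(track)


lib = PackLibrary()
lib.add_track("Pack A", "Sunrise Theme")
try:
    lib.add_track("Pack B", "Sunrise Theme")  # rejected: still in Pack A
except ValueError as e:
    print(e)
lib.remove_track("Pack A", "Sunrise Theme")   # step 1: update the old pack
lib.add_track("Pack B", "Sunrise Theme")      # step 2: now allowed
```

The point of the sketch is the ordering: the existing pack must be updated to drop the track before the new pack can pick it up.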
Any further questions, shoot, or send in a support ticket.
I wanted to call this one “Cruising the Information Superhighway”; the reviewer made me change it (which I found rather odd…) http://audiojungle.net/item/the-future-looks-bright/8815341
That title could have been accepted. We’ll get the team aligned on that detail. While I personally think the current title is more standardized and a good one for stock, If you’d like your original idea to stand, we can gladly fix that.
I have no idea why these changes were not communicated clearly to the review team. Simple math: (18+18+18)/2 = 27, not 25.
Lumen, that is not the formula used for music pack pricing.
Either way, the issue was already raised with the strategy and growth teams and is being addressed shortly. Item updates are being held until the matter is resolved, which should not take long.
Also, very important to understand – Soft Rejection is NOT a guarantee of acceptance when resubmitted.
Soft Rejection is an opportunity to rework aspects of the submission based on a reviewer’s encouraged suggestions.
If a resubmission still does not meet the requirements of the AudioJungle library, it may naturally not be accepted. Items cannot be bounced back and forth repeatedly. What other stock audio library even does this, sincerely?
And for the record, to quell the inherent speculation here, the item was not processed by two different reviewers.
In this case, given the nature of the initial soft rejection, the resubmission was held by the team specifically so that the original reviewer, already familiar with the item’s history, could reevaluate it; and for all intents and purposes, as far as the current quality standards go, that reviewer did not make an incorrect decision.
Thanks, everyone, for your attention and understanding.