17

Update on June 16, 2025

This experiment is being extended through July 15th. Challenge 3 will be posted on June 17th, and challenge 4 will be posted on July 1st. All challenge posts can be seen at stackoverflow.com/beta/challenges, linked in the left navigation on Stack Overflow.


Update on June 4, 2025

Awards for the first coding challenge have been noted on the challenge post. Congrats to the winners, and thank you to all participants who submitted entries!


Update on June 3, 2025

The first coding challenge is now complete! Entries can still be posted and votes can still be cast, but they will not count towards determining the winners of the challenge. Winners will be announced on June 4.

Additionally, the second coding challenge is now live! Entries can be submitted until June 10th. Please add feedback about the second challenge on this post.


Update on May 30, 2025

The challenge entries are now visible to all. Take a look here to vote on existing entries or submit one of your own. This challenge is open to continue receiving entries until June 3rd.


Update on May 27, 2025

The first coding challenge is now live! See it and submit an entry here. The challenge is open to receive entries until June 3rd.


A few weeks ago a staff member posted on Meta Stack Overflow about an idea to bring coding challenges (‘StackQuest’) to Stack Overflow. Based on the feedback there, we have decided to move forward with testing a modified version of this idea.

The changes we have made based on community feedback are:

  1. Participation will have no impact on rep
  2. The challenges will take place outside the main Q&A area

Our goal is to engage users with a new way to participate on the site (if you have ideas about how to make engaging on the Stack Exchange network generally more fun for users, please share them on this separate MSE post).

There are lots of people who rely on Stack Overflow to solve problems, but have never asked or answered a question, or even registered an account. This is one new way we are looking to welcome those users into the community. The coding challenges on Stack Overflow will be distinct from Code Golf because they won’t be "optimization oriented", more on that below.

In order to test whether or not Stack Overflow users are interested in something like this on the platform, we will utilize the Discussions space to set up several coding challenges in the next few weeks. There will be a new “Challenges” link in the left navigation, leading users to the challenge posts.

During this time the Challenges space will temporarily replace the Discussions space. All existing Discussions will be hidden, and the ability to create new Discussion posts will be turned off. The test will last a few weeks, after which Discussions will go back to their current state.

How will Challenges work?

Users will have one week to complete each challenge, and the second challenge will go up right after the first one concludes. There will be no rep requirement to participate (staff will be responsible for looking out for spam and generally moderating the challenge posts).

Each challenge will have multiple winners based on several different objective criteria that we are still determining (we are considering most upvotes received, most discussion generated, and more), as well as a “staff pick” where Stack Overflow staff developers can highlight an answer they found interesting or creative, that may not have met the objective criteria. Winners of each challenge will be highlighted on that post.

If we do build out this idea into a full-fledged feature in the future, it may end up looking and functioning differently than these test challenges. Specifically, we would add additional incentives for participation and winning (such as badges), as well as anti-plagiarism and anti-spam measures.

Success metrics

To measure success in this experiment we are going to evaluate several different metrics including the numbers of users that visit the challenge posts, submit an entry to one (or both) challenges, participate somehow (submit entry, vote, comment, etc.), as well as some qualitative feedback that we will collect.

About the challenges

For the initial tests we are creating coding challenges that fall within the following constraints:

  • Playful prompts rather than ones with a possible practical application
  • Challenges that can be technology-agnostic
  • Challenges that can be solved via a variety of methods (so they are interesting for both beginners and experts)
  • Challenges that have interesting, possibly shareable output

We will update this post to let you know when each coding challenge goes live. We're excited to launch this test next week and hope you will participate.

25
  • 21
    I'm very confused... Why are these challenges "replacing" discussions, for that duration? Are they reskinned discussions? Commented May 22 at 17:17
  • 3
    Why taint the new Challenges features by linking it to the discussions space? There has been very little love for that area and feature, so linking this new area to the old space may bring with it all the negative feelings which are held for discussions. Unless this is an attempt to pivot away from discussions altogether? Commented May 22 at 17:52
  • 17
    @Cerbrus Yes, for the purposes of this two-week experiment we are utilizing code & site infrastructure from the Discussions space. We wanted to avoid creating a whole new content type for a test that might not be developed further. If we do decide to flesh out coding challenges on SO into a full-fledged feature, it would likely look and function very differently, but for this test we really just needed a vehicle to evaluate whether users are interested in this kind of thing. Commented May 22 at 17:53
  • 5
    @DanielBlack I hear your concern. The main reason we did it this way is that it offered a simple solution to have a place to conduct this experiment without building a whole new content type on SO. During the experiment it won't say "Discussions" anywhere on the page, or in the left navigation, and all past Discussion posts will be hidden, so hopefully the association does not deter people much from participation. Commented May 22 at 17:56
  • 4
    @M-- Yes, since this is using the Discussions infrastructure, the coding challenge responses will show there as replies. Commented May 22 at 18:39
  • 3
    "technology-agnostic". except for you, HTML. to heck with you in particular. Commented May 22 at 19:07
  • 5
    Ah, so I would not be able to participate, as when I write in my language of choice then all my code will be eaten because Discussions is a broken mess of a product. No thanks. Commented May 23 at 16:44
  • 1
    Would you have this featured? I think this meta.stackoverflow.com/q/433648 post has received enough attention after a month and more than 3 weeks of being featured. cc: @Berthold Commented May 23 at 17:23
  • 3
    @starball & Axeman we've fixed that bug and the fix will be live when the experiment launches. Commented May 23 at 20:22
  • 5
    I personally wouldn't consider discussions "core functionality". and they're not breaking it. they're just pausing regular usage and using it to do something else. Commented May 25 at 9:44
  • 2
    @NoDataDumpNoContribution i mean, they can't be received negatively, so that's a given Commented Jun 3 at 17:03
  • 2
    I don't really see what challenge 2 has to do with ciphers. it's really just an exercise or exploration of encoding techniques. not that that's not interesting (I for one enjoy thinking about encoding), but the association just feels unnecessary. though I guess designing an obtuse encoding is a fun challenge. Commented Jun 3 at 20:21
  • 1
    I am now curious, have those who design the challenges really read Things to avoid when writing challenges and got the spirit of it? Commented Jun 4 at 3:31
  • 1
    @WeijunZhou I wrote both of the challenges, and I did indeed read that very helpful guide before writing them! It's possible that they've appeared elsewhere before but the challenges as written were not based on other content. Commented Jun 4 at 13:43
  • 2
    @Marsroverr ask and ye shall receive: meta.stackoverflow.com/q/434158/21182738 Commented Jun 4 at 14:32

17 Answers

27

Now that the results are visible, I figure I might as well share some observations, not as a participant in the challenge but as someone who came along to see what others had contributed.

1. The lack of down-vote is significant.

I was able to upvote the challenge responses that I felt deserved it, but I wasn't able to downvote the ones that I thought missed the point. Even if the downvotes don't count (like in the screenshot-of-the-week contests on Arqade), the fact that I couldn't adequately vote between "good" (upvote), "average" (e.g., no vote), and "bad/missed the point" (downvote) was annoying.

2. If we're going to do popularity contests, don't use vote count to influence the display order.

There are a lot of responses. I started viewing in oldest-first order, read through two or three on my fifteen-minute break, then gave up. The only reason I saw the other 32 responses is that I re-sorted by highest vote count, laughed at the ASCII art, and upvoted it.

This isn't fair.

  • Oldest first sort order directly feeds into the "fastest gun" issue.
  • Most votes sort order is good for Q&A under the theory that (hopefully) the highest voted answer is the most useful ... except Q&A allows downvotes. It's not 1:1. Here it just lets people pile-on to something that's already trending higher. (There's a name for this phenomenon but I don't recall what it is.)

If you're doing a popularity contest, you need to randomize the display order (but do it consistently; see #3 below) and also hide the existing vote counts.
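
A consistent-but-random order can be achieved by seeding a PRNG per viewer, so each user gets their own shuffle but sees the same one on every reload. A minimal sketch (the entry and user identifiers here are hypothetical, purely for illustration):

```python
import hashlib
import random

def stable_shuffle(entries, user_id):
    """Shuffle entries with a PRNG seeded from the user's id, so the
    order is random across users but stable across page loads."""
    seed = int(hashlib.sha256(str(user_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(entries)
    rng.shuffle(shuffled)
    return shuffled
```

Pagination (see #3 below) could then just slice this stable order, so page boundaries stay put between visits.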

3. There are too many entries.

It took me 15 minutes to read two or three entries. There are 35 right now, displayed in one long list with no paging. If I took my second 15-minute break of the day to go back and read more of the responses instead of writing up my thoughts, I'd have to do a bunch of scrolling to find where I even was, hopefully not scroll past something I hadn't read, and then remember what I had already read.

Some things that would help:

  • Paging. Show me five at a time. Note that this would require consistent ordering, so any randomization (see #2) would need to be internally consistent.
  • "See more" mechanics. Show the first x pixels of the response, and fold the rest behind a "see more". This would address the scroll fatigue.
  • Better differentiation between answers. As you scroll, it's very easy to miss where one answer ends and the next begins -- probably because we're bastardizing a comment feature and not an answer feature.

But at the end of the day, I'm not going to ever read all 35 entries. I admit it. Ain't nobody got time for that, and I've got a real job to be doing.

That's why your highest voted response isn't the one with the cleverest interpretation of the challenge, the best integration of AI/LLM, the most realistic, or the most efficient. It's the one that freaking formatted their entry to look like a pacifier/binky/dummy/soother/whatever you call it in your flavor of English.

4. Entries shouldn't become visible until the contest is over.

Once entries are visible, even to other entrants, you can vote on them. This directly influences fastest-gun issues. Especially when you have 35+ responses, anything new is going to lag behind things already submitted and voted on.

Especially for popularity contests, consider having a submission phase and a voting phase.

Plus, it helps with the fact that I've visited now and voted. What incentive do I have to come back later to vote again for late entries?

The challenge ends June 3, which is presumably when the votes will be frozen (hopefully you added that functionality...), so any submission on June 3 will likely be both entirely valid and never seen.

5. It's hard to see where one entry stops and the next begins.

I called this out in another section, but after reopening the challenge and looking at the responses I've decided to promote it to a full-on callout.

What these responses really drove home for me is how bad the comment threading format is for answers. We don't want answers in comments, so that's a good thing. But for real -- open up the challenge response and just scroll using the scrollbar ... it's very difficult to see where one entry ends and the next begins.

In a regular Q&A, there's a clear delineation where one answer ends and the next begins. You've got a horizontal line, some useful poster/post date info. In this interface, you've got a bit of whitespace and a (left) border gap. The comment functionality has always been more "conversational" so this made sense -- I ask a clarifying question, someone else might reply to me or ask another question ... all of these are related to the post we're commenting on so it doesn't necessarily need this kind of visual separation. But here we're using it for answers, and they're all starting to blur together. I actually felt tired after reading those two or three answers I got through, which is not something I've ever experienced reading regular answers on Q&A.


I think the biggest problem, outside of the user interface, is the "too many entries" problem. You got 35 entries on a "beta" or PoC/test run. How many are you expecting in the real world? Thousands?

Don't get me wrong, the three I did read were very interesting. They were well written, explained their thought processes and why they made the choices they did, and weren't just code dumps. I gave them upvotes. (Or just didn't vote on them.) But reading them took time and effort, and I don't have that in abundance. As mentioned, I had a 15-minute break at work and decided to see what fell out of this experiment.

Most answers will basically be never seen unless the sorting algorithm bubbles them up to a user (see point 2).


The other thing I noticed is that I'm not really seeing how this is going to foster community. I didn't have the urge to comment on any of the answers I did read. What would I say? "Good job?" That's just noise. The challenge responses weren't in languages I do much in (if anything). There's only so many ways to transform "little" to "widdle" or other baby speak nonsense, so there wasn't anything particularly novel. I don't really have incentive to do an in-depth code review to figure out if there's any non-obvious logic flaws in any of the code. And the ones that I wanted to downvote because they missed the point or weren't following the spirit of the challenge, I couldn't -- and I know better than to leave a "mean" comment saying anything negative.

6
  • 1
    "There's a name for this phenomenon but I don't recall what it is." I think you're thinking of "Snowball effect" Commented May 31 at 3:52
  • 1
    "That's why your highest voted response…[is] the one that freaking formatted their entry to look like a pacifier…" That's not because of scroll fatigue or any of the factors you identified there. It's because that is objectively one of the most clever and unique submissions. Unlike ~50%, it didn't use AI to generate the whole thing; that automatically puts it ahead. The rest of the ones that might have been actually written by a human aren't particularly clever. Some didn't even follow the challenge rules/spirit. Besides, obfuscated code is a long-standing tradition for code challenges. Commented May 31 at 9:43
  • 1
    Sure. It's an old enough tradition that it's just old hat. There's JavaScript obfuscators that will reformat code into shapes for you at the same time, not like someone made the effort to do it by hand. Commented Jun 1 at 10:57
  • 1
    True, not by hand, but I did have fun writing the little script that produced the ASCII art, since I had a (self-imposed) criterion that I not leave unused characters, and I didn't want it to look too obvious that I was cutting off blocks from the final image. Commented Jun 1 at 15:59
  • 2
    You raise a lot of good points here. Many of the issues you call out stem from the way we decided to implement this test, by temporarily taking over the Discussions feature. Because of this we inherited Discussions' particularities including no downvotes, no pagination, the user interface, and so on. This was an efficient way for us to test the waters in terms of SO users' interest in coding challenges, for the purpose of this experiment. However, if we do go on to further develop this feature in its own right, we would make more intentional choices about how it should look and function. Commented Jun 2 at 20:48
  • 1
    I recognize that: it does call into question the utility of discussions for much the same reasons. That said, the core issue I ran into, which was the sheer number of responses, is UI agnostic. I'm not aware of any platform with non-automated scoring that can handle this sort of volume well, especially when some people are just doing code dumps. Commented Jun 3 at 14:21
20

https://stackoverflow.com/beta/challenges/79640866/code-challenge-1-implement-a-text-to-baby-talk-translator

For more context on what this is and why we’re doing it, you can check out this post on Stack Overflow Meta. If you have feedback on the challenge itself, that’s the place to send it!

I personally don't feel comfortable giving this particular kind of feedback here, but this is where I have been told to put it so...

To measure success in this experiment we are going to evaluate several different metrics including the numbers of users that visit the challenge posts, submit an entry to one (or both) challenges, participate somehow (submit entry, vote, comment, etc.), as well as some qualitative feedback that we will collect.

I just wanted to say that some people may not submit an entry, not because they dislike the idea per se or the challenge in question, but simply because the challenge happens to be too hard or unsuitable given the expertise of the users who might like to participate. In my case, I'd really love to submit something, but I can't think of anything: it feels like the only sensible approach would be some sort of AI code capable of natural language processing, and that seems too complex for someone like me who has never worked with AI. The only things I can think of:

  • Make use of the ChatGPT API. The problem is that relying on another AI's API doesn't feel like a proper solution at all.

  • Replace some words, like dog with doggy or train with choo-choo. If I only did that, it would seem like I am not even trying to take this challenge seriously.
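
That word-substitution approach really is only a few lines. A minimal sketch (the lookup table is invented for illustration):

```python
import re

# Invented, minimal lookup table: illustration only.
BABY_WORDS = {
    "dog": "doggy",
    "train": "choo-choo",
    "stomach": "tummy",
}

def to_baby_talk(text):
    """Replace whole words via the lookup table, leaving unknown words alone."""
    def swap(match):
        return BABY_WORDS.get(match.group(0).lower(), match.group(0))
    return re.sub(r"[A-Za-z']+", swap, text)

print(to_baby_talk("The dog missed the train"))
# -> The doggy missed the choo-choo
```

Which, as noted above, hardly feels like taking the challenge seriously: it is all lookup, no language understanding.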

Because of this, I don't think I'll bother with this challenge in particular. But if I don't, and many other people don't either for the same reasons, then the staff might conclude these coding challenges aren't worth it because not many people are showing interest, when the problem in reality might be that the challenge they chose as the first one is just not for everybody.

7
  • 8
    Yeah, opening the challenge, my gut reaction was: "This isn't a challenge. This is work." This seems way too involved for a challenge. If you want to do this properly, geez.... Commented May 28 at 7:44
  • 1
    If it's going to step into the direction of being open-ended, which I think it is, compared to what I've seen on the codegolf site, I think it would have been nice if it leaned more into that to draw out more creativity. maybe not as much as something like what I've heard hackathons are like, but... something a bit towards that direction. Commented May 28 at 7:47
  • Your feedback is noted (as well as the upvotes, indicating that others agree with you). It would be hard to find a challenge that works for everyone who wants to participate, but we will keep this in mind (that there are people who are definitely interested in this kind of feature but just not the particular challenge we chose). There will also be a second challenge as part of this test, beginning on Tuesday June 3rd, which hopefully suits you better. Commented May 28 at 14:35
  • We appreciate the feedback! I've been the person over here coming up with ideas for challenges. We definitely wanted to focus on more potentially creative challenges, allowing for both possible "types" of solutions you suggested, so people can participate from all levels of programming experience. As Sasha mentioned, we'll be creating a substantially different challenge for the second one to hopefully capture more or different people's interest! Commented May 28 at 15:26
  • 8
    Yeah, I don't know how else you would even set out to do this particular challenge, other than (A) a table-driven lookup (which is a poor solution from an engineering perspective, not to mention absolute drudgery trying to create a table of all possible substitutions) or (B) some kind of learning algorithm that effectively implemented (A) without the manual drudgery. It is not a good challenge, and not what I expected when this was announced. It leans too hard into trying to be "creative", rather than trying to appeal to programmers. Made worse by blocking visibility unless you submit. Commented May 28 at 17:17
  • 6
    It's for sure an awful challenge prompt and the fact that it seems to be specifically well-suited to some kind of LLM-based solution despite AI-generated content being banned on SE seems... very poorly thought out. Commented May 28 at 20:19
  • 1
    I would also have the same thoughts. And also, that for a particularly good implementation, I feel you need a full repo to do this with installation steps (pip install), README, etc. Unless I've completely missed the challenge. Commented May 29 at 23:19
19

When I saw what the challenge was, my first thought was "why would I want to learn about baby-talk to implement this?" I can't use that knowledge anywhere else. I had a very strong negative reaction to combining baby-talk with what I thought was a professional skill challenge that I'm still unpacking. I couldn't possibly discuss baby-talk with the young people on the team I'm leading. I'm an old woman and it would be creepy.

I expected something like "decrypting" a secret message, or something involving flocking/self-organizing behavior or some other technical topic that you could just scratch the surface of or dig into. I would like to see a topic which might have something useful to learn even if the actual prompt was playful. Genetic algorithms. Huffman encoding. Graph traversal. Give participants something to chew on.

I understand that it is not easy to invent a coding challenge that hasn't already been done. This is my advice for the next challenge:

  • Significantly reduce the scope and make the challenge easier to judge by having some determinate behavior. If I give the same input to all the entries, all of the valid ones will have some minimum output that unequivocally demonstrates they were successful at the challenge.
  • Choose a topic that doesn't involve children.
  • Don't try to anticipate how people will be creative with the challenge; this is a prototype. It would be better to be too simple than too complicated. It should have a part that anyone can accomplish and enough room to let people showcase their creativity.

Searching for baby talk translators online yields some AI-driven results but not much else.

That makes this a bad prompt, not a good one. Hardly anyone is interested enough in baby-talk to do anything worthwhile around it software-wise. Just because something has been done before doesn't mean there is no room to do something creative with it.

4
  • I'd actually personally be interested to see creativity leaned into more in the challenge definition. Commented May 29 at 19:19
  • @starball Anticipating ways someone else could be creative actually restricts the challenge by framing it to fit one perspective. Just make it open enough to let people surprise you. Commented May 29 at 19:29
  • 11
    Even after learning about "baby-talk" the challenge did not make more sense. Commented May 29 at 19:46
  • @ColleenV yes, making it relatively more open so people can do those interesting and surprising things is what I'm interested in. highly-specced-out challenges about optimizing performance and such is fun in its own way, but I'd rather see the creative stuff. Commented May 29 at 20:29
16

See Berthold's comment

Non-whitelisted HTML-like text is incorrectly stripped from Discussions post markdown (also see Discussions garbles code)

This is especially problematic in R (and possibly lots of other technologies) as it uses <- for assignment. It will adversely affect the experience of users trying to answer coding challenges with code.
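
To illustrate the failure mode (this regex is a stand-in for a naive sanitizer, not Stack Overflow's actual implementation): anything between `<` and the next `>` looks like an HTML tag, so R's assignment operator gets swallowed along with everything up to the next comparison.

```python
import re

def naive_strip_tags(text):
    # Treats anything from '<' to the next '>' as an HTML tag and removes it.
    return re.sub(r"<[^>]*>", "", text)

r_code = "x <- 5; stopifnot(x > 3)"
print(naive_strip_tags(r_code))
# -> x  3)
```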

I have already brought this up with staff privately but wanted to put it out here for reference.

13

This is an interesting concept which feels like there could be overlap with two communities:

  • Code Golf — any "code challenge" to write the shortest amount of code.

  • Code Review — code challenges are the bread and butter of that community, but those are primarily non-Stack Exchange code challenges.

It will be interesting to see the impact (if any) on those two communities. Be on the lookout for the potential for interplay; can one feed off the other? Can I repost an SO code challenge as a code review? Can I repost an SO code challenge on Code Golf provided the "winning" metric is not the same?

3
  • 5
    I think these are all good questions which could be explored further if we do end up continuing to develop this feature beyond testing. Commented May 23 at 13:53
  • 2
    Code Golf is a close match for the structure (question-challenges and answer-solutions) but Code Review feels like a closer match for the content—the code doesn't need to be anything special and there's only a few basic rules to follow while posting it. Actually, participating on these other sites is a good way for a new user to gain a small amount of rep on SO, though SO redirecting a lot of new users all at once to a single site would cause moderation challenges. Commented May 23 at 16:28
  • 3
    Code Golf (the site) is not solely about the shortest amount of code. Check the tag code-challenge to see what may be closer to the Coding Challenge feature proposed here. Commented May 25 at 12:32
10

Good idea, but clearly, no one took it seriously

While I love the idea, it is very clear from Challenge #1 that something is wrong: absolutely no one has respected the initial instruction of the challenge:

Imagine you were talking to a baby. You would probably modify your speech to make it more engaging and accessible.

I have checked every single entry. All of them are either a hardcoded dictionary that changes common words into the same word with additional "w"s (I actually counted 18 out of 45 entries that change "hello" into "hewwo". How original.), LLMs, or duplicated syllables / replaced letters.

The challenge was to turn YOUR (as an adult) speech into something a baby can understand. Not turning a wall of text into what a baby would say (emphasis mine):

Baby talk is a pattern of language used when adults talk to babies.

Not when babies talk to adults. Unless you say "Hewwo" to your baby, which I hope you don't, since improperly talking to your baby can lead your baby to then talk improperly and struggle to pronounce words correctly.

I expected to see entries that would do things like substituting complex or uncommon formulations and words with more common ones ("said he" -> "he said", "enquired" -> "asked", ...), with highly clever tricks to detect English speech patterns programmatically.
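
A rough sketch of that interpretation, assuming a small hand-made synonym table and one reordering rule (a real entry would presumably draw on word-frequency or synonym databases instead):

```python
import re

# Hand-made examples standing in for a real synonym/commonness database.
SIMPLER = {"enquired": "asked", "purchase": "buy", "utilize": "use"}
# Undo inverted reporting clauses like "said he" -> "he said".
REORDER = [(r"\bsaid (he|she)\b", r"\1 said")]

def simplify(text):
    for pattern, repl in REORDER:
        text = re.sub(pattern, repl, text)
    # Split on non-word runs but keep them, so punctuation survives.
    tokens = re.split(r"(\W+)", text)
    return "".join(SIMPLER.get(t.lower(), t) for t in tokens)

print(simplify('"Come here," said he, and enquired about dinner.'))
# -> "Come here," he said, and asked about dinner.
```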

I feel like everyone took the challenge as a joke and tried to be funny. It turned from a coding challenge into a joke challenge (also evidenced by the fact that the most upvoted entry is the one whose code is written in a funny shape).

Here are, in my opinion, the reasons why this happened.

Small time windows can lead to low effort and lack of originality

One week is a very short amount of time. Many of us have a job and might not have enough energy or motivation to do coding challenges on weekdays, making weekends or days off the only timeframe to work on such a challenge.

This essentially reduces coding time from 7 days to 2 days for many, many people, which is not enough to make something good.

This also encourages looking at what others have done instead of coming up with an idea of your own, because finding something original is often the most time-consuming part of a programming project.

Maybe give us 2 or 3 weeks?

Lack of downvotes

Since you can't downvote an entry, people who find entries funny (regardless of whether they actually follow the instructions or employ clever programming techniques) will upvote them. People who would have preferred more serious entries can only stay neutral; they can't even upvote serious entries, since there aren't any.

Entries visibility

I see you tried to keep it fair by having other entries hidden for the first half of the timeframe. Since the challenge started on a Tuesday, this means entries were visible by the weekend, when people usually have more time to work on this kind of thing.

Entries are (by default) sorted by upvote count and all put on a single page. The order should be randomized: having the most upvoted entry on top just makes it naturally receive more upvotes than whichever entry was just posted and has none.

Entries should be, in my opinion, hidden for the whole duration of the challenge, and then followed by a week-long voting phase (like moderator elections) where they would be displayed randomly to each user.

Give example on what the objective may be

You gave sample text to translate, but I believe the objective of the contest would have been better understood if you had also provided a sample translation, just as an example of what you meant by "modify[ing] your speech to make it more engaging and accessible".

Participants, please, don't forget: serious can be funny, too!

Something worth remembering is that coding challenges can be just as fun as joke challenges.

It is entertaining to write a program that performs an interesting task: researching the topic, thinking up solutions, and so on. It doesn't have to be a "hello"-to-"hewwo" dictionary to be fun.

Yes, challenges are meant to be fun, but you wouldn't use a car in a running contest, even if it'd be funny to look at, would you?

Conclusion

I feel like Challenge #2 is a much better approach and less prone to misinterpretation, but many of the points raised in this answer still stand.

11
  • 4
    "The challenge was to turn YOUR (as an adult) speech into something a baby can understand." To be quite honest, this was exactly my understanding of the challenge. But then I have no idea what that would actually entail. I can only think of two ways to talk to babies. One is using the same language but changing your delivery: enunciate properly, follow a different rhythm. I don't know how that could be used to transform text, though, except if you just return the input. At best, change longer words to shorter alternatives. Commented Jun 4 at 10:17
  • 1
    The other way is to use language similar to that of babies - maybe say "bunny" or "bun-bun" instead of "rabbit". Which is sort of the direction many entries went in. And they softened the language even more with stuff like "hewwo", which isn't what I expected the result to be. But I read the challenge several times, checked the linked Wikipedia article too, and couldn't figure out what the challenge actually wanted. Commented Jun 4 at 10:17
    @VLAZ Well, I expected to see entries that would do things like substituting complex or uncommon formulations and words with more common ones ("said he" -> "he said", "enquired" -> "asked", ...), with highly clever tricks to detect English speech patterns programmatically. That's what makes (imo) the most sense when it comes to adapting an adult's language for a baby. There are many word databases with synonyms, word commonness, etc., that could have been used to swap hard words for easy ones, for example. Commented Jun 4 at 10:20
  • 1
    While "bunny" for "rabbit" sounds good with this, "hewwo" for "hello" and using "bun-bun" completely miss the point of making speech more accessible. If anything, this will either confuse the kid, or make them learn something wrong. This challenge could have been educational and very useful, but it became a joke contest. I understand how interpretation could differ though, but I find it quite surprising/disappointing that essentially everyone agreed to go the easy joke-y path rather than thinking the challenge through. Commented Jun 4 at 10:23
  • 1
    Yep, I acknowledge yours is a good interpretation. But when I was reading the challenge (several times) I didn't arrive at a single unambiguous expectation. So, eventually I just gave up. Commented Jun 4 at 10:23
  • 3
    Either way, I would have expected "Imagine you were talking to a baby. You would probably modify your speech to make it more engaging and accessible. [...] Baby talk is a pattern of language used when adults talk to babies." to have been clear enough. Commented Jun 4 at 10:24
  • @VLAZ Well, fair enough. What made me give up was the lack of time, honestly. Commented Jun 4 at 10:24
  • Very good ideas in this answer. The only aspect I see differently is whether the participants deliberately made it a joke. I hope not. My guess would be that they didn't know better and actually wanted to participate seriously but somehow couldn't. As for aesthetics: I liked the funny shaped contribution, but didn't upvote it. I probably also wouldn't have downvoted it because of the shape. Commented Jun 4 at 13:01
  • @NoDataDumpNoContribution Thanks for the kind words! Well, maybe, but I don't get how a community so full of experts could have "not known better" and/or "couldn't" participate seriously. Had there been some serious entries, sure, but there are essentially none (I don't count LLMs doing the job as a serious entry), so I can't really wrap my head around the possibility that participants didn't voluntarily turn the challenge into a joke. Kudos to you for resisting the shape-based upvote temptation! I would have probably downvoted it if it hid better entries, but well. Commented Jun 4 at 13:13
  • "It turned from a coding challenge to a joke challenge" -- what's wrong with that? Aren't we trying to inject a little fun back into StackOverflow? This isn't life-or-death. We don't need to be so serious. In my opinion, we could use a little "not serious" on SO. Commented Jun 18 at 15:55
  • @GregBurghardt because (1) serious challenges can be fun too, and (2) the challenge was advertised as a coding challenge, not a joke challenge. While SO could use some fun, I doubt turning a whole coding challenge into a joke helps anything at all. Commented Jun 18 at 16:59
7

Is this going to be pre-screened for prohibited content before the first batch of "solutions" is published?

3
  • Not by Discussions moderators, as the submissions are not visible to us (well, I have looked at some of the submissions using the unfortunate "bugs" of the Discussions UI, but I'm not sure if others have done that). Commented May 28 at 7:18
  • 1
    Ironically, there are GPT-powered baby-talk translators out there... And honestly, that seems like the way to go for this "challenge", which feels quite deliberate. Commented May 28 at 7:49
  • 4
    One of the two I checked was obviously AI generated; even the text portion of the response appeared to be entirely generated Commented May 28 at 17:44
6

Currently, you cannot undelete self-deleted challenge posts.

Allow undeleting self-deleted challenges

I often delete things when I've made a mistake and want to fix it before undeleting. Can this be fixed?

5

I look forward to this launching, I think it'll be a fun thing to engage with!

However, I just have a little concern about using answer engagement as a winning criterion. It's an effective method, but it does require a little consideration to ensure it's fair.

You may have heard about the Fastest Gun in the West problem. Simply put, the first answers posted get more votes/interactions than newer answers. Additionally, if answers are sorted by score by default, then newer answers have a lower chance of being seen.

I'm sure there have been internal considerations about this, and that there'll be something in place to ensure all answers get a chance at a level of engagement. However, I think it's still worthwhile making sure it's explicitly handled before anything launches. Additionally, discussing it now means people can give feedback if needed :p

Unrelated, but I do appreciate that y'all came to check out The Nineteenth Byte as I suggested in my answer on the initial question. It's great to see community engagement/exploration from SE, especially when the community in question is very pertinent!

1
  • 6
    Thanks for the thoughts - one way we're trying to mitigate the fastest gun problem re: voting is to hide replies from everyone except for those who have participated for the first three days. We will see if it helps, and if you have any other ideas we are all ears! Commented May 27 at 14:16
5

Now that it's all said and done, here's my feedback:

I personally enjoy having an outlet for creativity, so when I saw the challenge, I decided to give it a shot. I had fun doing it (though probably for the wrong reasons), and the initial response was honestly a letdown.

About the challenge itself: I think there's an interesting idea here. Baby talk isn't inherently exciting, but there are certainly clever ways to build a stemming and lemmatization system to convert normal text into baby talk. That said, any serious solution would likely involve data files or artifacts that are hard (or impossible) to include inline with code in a small text box.
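To illustrate the kind of approach I mean - this is not any entrant's actual code, just a minimal sketch - here's a toy version in Python. The lexicon and the crude suffix-stripping "stemmer" are placeholders for the large data files a serious solution would need:

```python
import re

# Hypothetical substitution table: adult words -> baby-talk equivalents.
# A serious solution would load a much larger lexicon from a data file.
BABY_LEXICON = {
    "rabbit": "bunny",
    "dog": "doggy",
    "cat": "kitty",
    "stomach": "tummy",
}

def stem(word):
    """Very crude stemmer: strip a trailing 's' so 'rabbits' matches 'rabbit'."""
    if len(word) > 3 and word.endswith("s"):
        return word[:-1], "s"
    return word, ""

def babyfy(text):
    """Replace each known word with its baby-talk form, keeping case and suffix."""
    def repl(match):
        word = match.group(0)
        base, suffix = stem(word.lower())
        replacement = BABY_LEXICON.get(base)
        if replacement is None:
            return word  # unknown word: leave it untouched
        out = replacement + suffix
        return out.capitalize() if word[0].isupper() else out
    return re.sub(r"[A-Za-z]+", repl, text)

print(babyfy("The rabbits chased the dog."))  # -> "The bunnys chased the doggy."
```

Even this toy version shows the data problem: all the real work lives in the lexicon (and in a less embarrassing stemmer - "bunnys" indeed), and that doesn't fit inline in a small text box.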

What I did: One of my favorite coding challenge answers is this one that generates an image of all the colors. It solves the main task while also doing something interesting on the side. That's something I try to do too: add a creative twist to keep things fun. So I created a simple ASCII art version of my solution. It's far from perfect. My actual answer is very simple because I hit a wall trying to invent a better algorithm. So I fell back on what I know:

pygame app

I built a small GUI to generate the ASCII art image. For what it's worth, you can see my work here on GitHub. This part was by far the most enjoyable and, ironically, had nothing to do with the actual challenge. It just gave me an excuse to have fun building a little tool.

Takeaways: I'd really prefer if all code had to be posted outside the description. The nested scrollbars everywhere get old fast. Even if it links to a Stack Overflow-hosted page, it's more convenient (and more inclusive for people with different toolchains) to link to their Git platform of choice. And if this does catch on, there are some serious usability issues. As of writing this, there are 46 answers, and the page is over 50,000 pixels tall. Good luck scrolling through that to read and vote. Pagination would help a lot. Something like defaulting to a random sort might give more visibility to creative answers and reduce the snowball effect (though it's not a perfect fix).

The not-so-great parts: When I first submitted my answer, I clicked "share" and sent it to friends with a "Hey, look at this silly thing I made!" But what they saw was a challenge with no answers, so my post didn't show up. Kind of embarrassing. I wish the share link gave a heads-up or preview of what others will actually see. Then I got accused of using an LLM. Maybe because the string "LLM" appears in the post, or maybe someone really thought I used one. Either way, it was a bit demoralizing. I wasn't expecting a stream of meaningless pleasant words, but I thought I'd done something clever and was proud of it.

Ah well. At least I can now put "Award-Winning Developer" on my profile. I'll take it.

1
  • 1
    Re: LLM thing. Sorry to hear it... I for one felt this deserved the win it got. I think maybe if they'd disallowed AIGC at all in the first place (what were they thinking...), a lot of the distrust in these submissions could have been avoided. But sometimes you just have to rest in the fact that you did something fun and cool. Commented Jun 6 at 13:53
4

I'm following the post and getting notifications about submissions, but when I click on them, it takes me to the discussion thread and I don't see them there. Is this intentional?

4
  • The challenge claims "For the first three days, other entries are only visible once you have submitted your own.", but apparently this limitation was implemented rather sloppily. Commented May 27 at 22:26
  • From the post: May 30: All entries visible to everyone Commented May 28 at 7:14
  • @M-- about revision2, okay... but then there's a bug(s). staff would presumably want to know about the bug. Commented May 28 at 7:20
  • @starball believe me, they know :) p.s. it has a post with status-review even (I think Jeremy posted something). Commented May 28 at 7:21
4

Just one observation from the results of the first challenge: submissions without output for the example input (either given in the box or as a link) are not so nice, because one would need to run the program oneself. Maybe require showing the output, at least when it can vary (as it did in challenge 1).

Also, there should be some accompanying text: what is the name of the language, and what technique was used? After all, readers may want to learn something. Maybe require this as well.

And what about a period reserved for clarifications (like asking about the character set to encode in challenge 2), with comments below the challenge? Some time should be set aside for this before the clock officially starts ticking on solving the challenge.

2

It took me longer than it should have to find the button for adding a comment to a challenge entry. This is because the icon used for it is not the most appropriate. (Yes, I know, it says "Reply" right next to it, but I think we can change this nonetheless.)

Current comment icon here in StackOverflow

In the context of discussions, where everything is a comment like in a forum or an email conversation, it makes more sense; but here "Reply" means adding a small comment, so maybe a comment icon would fit better. In fact, we could also change the "Reply" text to "Comment".

This is the icon used in Twitter for that:

Twitter comment icon

And deviantart:

Deviantart comment icon

In fact, that icon probably makes more sense for sharing than for replying. It's the icon YouTube uses for exactly that:

Youtube share icon

The "Share" icon looks like a link but since that button only shows a link to go straight to the comment I guess it makes sense to keep it that way. But I think we can change the "Reply" text and icon.

1
  • The arrow pointing to the left is typically used as a reply icon in e-mail clients, like Outlook and Gmail. It is consistent with the introduction of threaded replies, though I do agree that a "Comment" button should have a different icon, yes. Commented Jun 6 at 7:26
2

I stumbled on this "feature" only today.

I have to ask. Exactly... why should SO host this content when you already have at least TWO sites that overlap with this topic - Code Golf and Puzzling? If you were trying to promote participation, wouldn't a much smarter solution be promoting those sites, so that users might get engaged in new communities, instead of cluttering SO with content most users probably won't be interested in as they perform their daily content curation routine?

Even if you thought that your "challenges" didn't fit the existing sites... launching a new one and then promoting it through a link would be smarter. If anything, with an actual site you might get people to actually post new challenges too, instead of having to come up with those yourself (and let's be honest, considering the company's lack of focus, give it a month and new challenges will start to come in every 6-8 time units...).

2

If you ever decide to bring Challenges back after this round of experimentation, please put the deleted posts at the bottom of the page. This issue exists with Discussions as well (same UI), but it was not as annoying there, since only a handful of posts have had that many replies, especially long ones (deleted and non-deleted).

I understand that only moderators (Discussions Mods or Diamond Mods) have this issue, but it is still a critical one, as it makes navigating the page and doing any sort of moderation extremely difficult; see the screen capture*. Moreover, everyone would like to see expanded curation abilities for users of Discussions (and Challenges), including showing deleted replies (and submissions) to 10k+ rep users. So when/if that happens, this issue would affect more users.

I ended up writing my own userscript to collapse the deleted posts, but I'd rather have them at the bottom and not in the middle of everything.
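For what it's worth, the core of such a userscript is tiny. Here's a hedged sketch of the reordering idea - the real script operates on DOM nodes, and the `.js-replies` selector below is an assumption for illustration, not the actual markup:

```javascript
// Sketch: given the replies in page order, sink the deleted ones to the
// bottom while keeping the relative order within each group. Replies are
// modeled as plain objects here; in a real userscript they'd be DOM nodes
// identified by whatever "deleted" class the page actually uses.
function sinkDeleted(replies) {
  const kept = replies.filter((r) => !r.deleted);
  const removed = replies.filter((r) => r.deleted);
  return kept.concat(removed);
}

// In a real userscript, something like (selector is hypothetical):
//   const container = document.querySelector(".js-replies");
//   for (const node of sinkDeleted([...container.children])) {
//     container.appendChild(node); // appending moves the node to the end
//   }
```

Re-appending an existing node moves it rather than copying it, so the loop above is all the DOM manipulation the idea needs.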


* If only we could upload animated WebP; this is the best quality GIF that I could manage to get under 2 MiB.

0
0

Can you enable images in replies? I saw this submission and wanted to let the author know that there is an error that won't let me try it. I wanted to send a screenshot, but I couldn't include it in my reply, so I had to write out the error message by hand instead.

Also, is this specific to challenges or did this also happen in discussions?

-8

I wanted to submit a reply to a post but got this error:

Reply must be at least 30 characters.

Does this make sense in the context of challenges, considering that they are meant to be a place that brings more user engagement (presumably without having to worry about quality and correctness as much as you do when submitting questions and answers)?

Specifically, this is the challenge submission I wanted to reply to. I only wanted to say "Wtf".

1
  • 1
    Un-meta comment since you wanted to ask: I felt the coding challenge was not very good, so I went with the ironic approach. However, in retrospect, I think the top submission captures that spirit better. Subjectively, I think the program itself is a little over-engineered, but the meme value overshadows this. Commented Jun 3 at 13:53
