Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csSE_bot@mastoxiv.page
2024-04-17 06:52:57

Quality Assessment of Prompts Used in Code Generation
Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, Joanna C. S. Santos
arxiv.org/abs/2404.10155

@aredridel@kolektiva.social
2024-04-13 15:02:33

I've worked on community groups for a long long time, and the only good thing I can say about most codes of conduct is that their existence proves the group fought past the army of dudes who think they get in the way of important things like letting them dominate the group.
But seriously, most codes of conduct are worth about one bit of information: "has cared at all (y/n)"
There's a single code of conduct document that was extremely influential by being designed to be copy-and-pastable: the document was given a specific name, work was done to propagate the idea that all you had to do was adopt it as-is. Drop in and ready to go!
The only problem is that it doesn't work. A long, legalistic set of rules about what's Not Allowed with no actual policy for enforcement invites a bunch of problems: a long list can be treated as exhaustive, so people will do things not on the list and then cry foul when you tell them to stop. A lack of enforcement policy invites a binary approach: is a person good (did nothing on the list) or bad (did something on the list)? If they're bad, kick them out; if they're good, keep them.
This is bad.
The actual rules that will be enforced will be much more subtle, will favor people in positions of power, and will not yield results consistent with the stated values of various factions of the group. Arguments will ensue about whether or not something "really counts" as an item on the list, because often the actual decision being made but not explicitly stated is "do we kick out a person important to the group for some broken way they relate to others in the group?"
The other way they get used is "here's a person doing something some part of the group doesn't like, which rule can we use to kick them out?"
These are both broken approaches that don't actually reflect the relations of the group, and they lead to punitive and destructive methods of enforcement, rather than healing and reparative methods. This leads to conflict within the group being turned into a code of conduct violation while at the same time allowing outsiders to weaponize the code of conduct by provoking those conflicts.

@whitequark@mastodon.social
2024-04-13 08:39:00

The #GlasgowInterfaceExplorer project now has a written Code of Conduct documenting our practices!
I encourage you to read it: github.com/GlasgowEmbedded/gla…

@aredridel@kolektiva.social
2024-04-13 13:24:09

github.com/GlasgowEmbedded/gla
Now that's how you write a code of conduct.

@arXiv_csCL_bot@mastoxiv.page
2024-05-08 08:32:38

This arxiv.org/abs/2404.12489 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@aredridel@kolektiva.social
2024-04-13 15:09:42

Where the Glasgow Embedded code of conduct avoids this is by being broad strokes, and pretty clear about who the project is run by. It's much more constitutional in nature, and by being vague about the specific problems but specific in who will care and act on them, it's much easier to build a coherent group around, and the specific issues they care about are much more likely to have a unified response. It's much harder to weaponize because there's a who embedded with the what: it's not up to argument whether something "counts" or not. The core group of people who made the project are going to decide and they're not going to put up with any anti-trans rhetoric in this case. They're gonna be okay on racism, if not perfect. You can see how it'll land if there's conflict, and the conflict is largely going to be technical _or_ social, but not both. This is way easier to deal with.

@arXiv_csCR_bot@mastoxiv.page
2024-03-07 08:25:05

This arxiv.org/abs/2309.04909 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@poppastring@dotnet.social
2024-03-28 01:59:34

***Israel Deploys Expansive Facial Recognition Program in Gaza***
The experimental effort, which has not been disclosed, is being used to conduct mass surveillance of Palestinians in Gaza, according to military officials and others.

@arXiv_csHC_bot@mastoxiv.page
2024-05-01 07:17:23

Can humans teach machines to code?
Céline Hocquette, Johannes Langer, Andrew Cropper, Ute Schmid
arxiv.org/abs/2404.19397 arxiv.org/pdf/2404.19397
arXiv:2404.19397v1 Announce Type: new
Abstract: The goal of inductive program synthesis is for a machine to automatically generate a program from user-supplied examples of the desired behaviour of the program. A key underlying assumption is that humans can provide examples of sufficient quality to teach a concept to a machine. However, as far as we are aware, this assumption lacks both empirical and theoretical support. To address this limitation, we explore the question `Can humans teach machines to code?'. To answer this question, we conduct a study where we ask humans to generate examples for six programming tasks, such as finding the maximum element of a list. We compare the performance of a program synthesis system trained on (i) human-provided examples, (ii) randomly sampled examples, and (iii) expert-provided examples. Our results show that, on most of the tasks, non-expert participants did not provide sufficient examples for a program synthesis system to learn an accurate program. Our results also show that non-experts need to provide more examples than both randomly sampled and expert-provided examples.

@Techmeme@techhub.social
2024-03-27 11:05:32

Israeli officials detail an expansive and experimental facial recognition program in Gaza to catalog Palestinians without their knowledge, starting in 2023 (Sheera Frenkel/New York Times)
ny…

@arXiv_csSE_bot@mastoxiv.page
2024-05-02 06:52:57

CC2Vec: Combining Typed Tokens with Contrastive Learning for Effective Code Clone Detection
Shihan Dou, Yueming Wu, Haoxiang Jia, Yuhao Zhou, Yan Liu, Yang Liu
arxiv.org/abs/2405.00428

@whitequark@mastodon.social
2024-04-25 10:17:38

OH: "i will commit a code of conduct violation"

@arXiv_csSE_bot@mastoxiv.page
2024-02-20 06:59:03

Evaluating Program Repair with Semantic-Preserving Transformations: A Naturalness Assessment
Thanh Le-Cong, Dat Nguyen, Bach Le, Toby Murray
arxiv.org/abs/2402.11892

@arXiv_csSE_bot@mastoxiv.page
2024-04-03 08:41:34

This arxiv.org/abs/2404.00640 has been replaced.
link: scholar.google.com/scholar?q=a