Maxlath
A belated welcome!
Here's wishing you a belated welcome to Wikipedia, Zorglub27. I see that you've already been around a while and wanted to thank you for your contributions. Though you seem to have been successful in finding your way around, you may benefit from following some of the links below, which help editors get the most out of Wikipedia:
- Introduction
- The five pillars of Wikipedia
- How to edit a page
- Help pages
- How to write a great article
Also, when you post on talk pages you should sign your name using four tildes (~~~~); that should automatically produce your username and the date after your post.
I hope you enjoy editing here and being a Wikipedian! If you have any questions, feel free to leave me a message on my talk page, consult Wikipedia:Questions, or place {{helpme}} on your talk page and ask your question there.
Again, welcome! -- Irn (talk) 17:46, 9 November 2011 (UTC)
September 2013
Hello, I'm BracketBot. I have automatically detected that your edit to Mediapart may have broken the syntax by modifying 2 "[]"s. If you have, don't worry: just edit the page again to fix it. If I misunderstood what happened, or if you have any questions, you can leave a message on my operator's talk page.
- List of unpaired brackets remaining on the page:
- Article by Scott Sayare, published by the [[New York Times]] - March 19, 2013]]</ref>.
Thanks, BracketBot (talk) 22:56, 18 September 2013 (UTC)
Merger discussion for Vitalik Buterin
An article that you have been involved in editing—Vitalik Buterin—has been proposed for merging with another article. If you are interested, please participate in the merger discussion. Thank you. Jtbobwaysf (talk) 16:15, 29 April 2016 (UTC)
JavaScript RegExp problem
I noticed you have experience in JavaScript. I'm hoping you can help me with a problem I've run into writing a userscript.
Please see my post at Wikipedia talk:WikiProject JavaScript#Nested RegExp.
Thank you. The Transhumanist 12:22, 5 May 2017 (UTC)
Wikidata-cli
Just a couple of questions about wikidata-cli:
- Does wikidata-cli support qualifiers or multirow sources?
- What datatypes does it support as values?
An example use case could be Jaakko Koskela (Q29961490) and P3602 (candidacy in election) on that page. I used pywikibot to save that, but it would be nice to have a more general solution. --Zache (talk) 07:26, 25 May 2017 (UTC) (edit: no wikidata P-template in enwiki fix)
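Whatever the command-line surface ends up looking like, qualifiers come down to the claim JSON that the Wikibase wbsetclaim API accepts. Below is a minimal sketch in Python of such a claim, using the property IDs mentioned above; the value and qualifier entity IDs (Q123, Q456) are placeholders, not real data, and P2937 (parliamentary term) is used purely as an illustrative qualifier:

```python
# Sketch of the claim JSON shape accepted by the Wikibase wbsetclaim API.
# Entity IDs below are placeholders; P2937 (parliamentary term) is only
# an illustrative qualifier property.
def make_claim(main_property, value_item, qualifiers=None):
    """Build an item-valued claim, optionally with item-valued qualifiers."""
    def item_snak(prop, qid):
        return {
            "snaktype": "value",
            "property": prop,
            "datavalue": {
                "type": "wikibase-entityid",
                "value": {"entity-type": "item", "id": qid},
            },
        }

    claim = {
        "type": "statement",
        "rank": "normal",
        "mainsnak": item_snak(main_property, value_item),
    }
    if qualifiers:
        # Qualifiers are keyed by property, each holding a list of snaks.
        claim["qualifiers"] = {
            prop: [item_snak(prop, qid)] for prop, qid in qualifiers.items()
        }
    return claim

# P3602 = candidacy in election; Q123/Q456 are stand-ins.
claim = make_claim("P3602", "Q123", qualifiers={"P2937": "Q456"})
```

A tool that supports qualifiers needs to emit (or amend) this structure; references follow the same snak pattern under a "references" key.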
Facto Post – Issue 9 – 5 February 2018
Wikidata as Hub

One way of looking at Wikidata relates it to the semantic web concept, around for about as long as Wikipedia, and realised in dozens of distributed Web institutions. It sees Wikidata as supplying central, encyclopedic coverage of linked structured data, and looks ahead to greater support for "federated queries" that draw together information from all parts of the emerging network of websites.

Another perspective might be likened to a photographic negative of that one: Wikidata as an already-functioning Web hub. Over half of its properties are identifiers on other websites. These are Wikidata's "external links", to use Wikipedia terminology: one type for the DOI of a publication, another for the VIAF page of an author, with thousands more such. Wikidata links out to sites that are not nominally part of the semantic web, effectively drawing them into a larger system. The crosswalk possibilities of the systematic construction of these links were covered in Issue 8.

Wikipedia:External links speaks of them as kept "minimal, meritable, and directly relevant to the article." Here Wikidata finds more of a function. On viaf.org one can type a VIAF author identifier into the search box, and find the author page. The Wikidata Resolver tool, these days including Open Street Map, Scholia etc., allows this kind of lookup. The hub tool by maxlath takes a major step further, allowing both lookup and crosswalk to be encoded in a single URL.
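The outbound-link mechanism described above is simple to sketch: each identifier property carries a formatter URL (stored as property P1630) in which "$1" stands for the external identifier, and substitution yields the link. A minimal Python illustration — the formatter patterns for P214 (VIAF ID) and P356 (DOI) are real, but the identifier value is a placeholder and the lookup table is a hand-made stub, not an API response:

```python
# Each Wikidata identifier property has a formatter URL (P1630) in which
# "$1" stands for the stored external identifier. Two real patterns:
FORMATTER_URLS = {
    "P214": "https://viaf.org/viaf/$1",   # VIAF ID
    "P356": "https://doi.org/$1",         # DOI
}

def external_links(identifiers):
    """Turn {property: external id} pairs into outbound URLs."""
    return {
        prop: FORMATTER_URLS[prop].replace("$1", value)
        for prop, value in identifiers.items()
        if prop in FORMATTER_URLS
    }

# "123456" is a placeholder identifier, not a real VIAF record.
links = external_links({"P214": "123456"})
```

This is the sense in which an identifier statement doubles as an external link: the property plus its formatter pattern is enough to reach the remote site.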
Editor Charles Matthews, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page.
Newsletter delivered by MediaWiki message delivery
MediaWiki message delivery (talk) 11:50, 5 February 2018 (UTC)
Facto Post – Issue 10 – 12 March 2018
Milestone for mix'n'match

Around the time in February when Wikidata clicked past item Q50000000, another milestone was reached: the mix'n'match tool uploaded its 1000th dataset. Concisely defined by its author, Magnus Manske, it works "to match entries in external catalogs to Wikidata". The total number of entries is now well into eight figures, and more are constantly being added: a couple of new catalogs each day is normal.

Since the end of 2013, mix'n'match has gradually come to play a significant part in adding statements to Wikidata, particularly in areas with the flavour of digital humanities, though datasets can of course be about practically anything. There is a catalog on skyscrapers, and two on spiders.

These days mix'n'match can be used in numerous modes, from the relaxed gamified click through a catalog looking for matches, with prompts, to the fantastically useful and often demanding search across all catalogs. I'll type that again: you can search 1000+ datasets from the simple box at the top right. The drop-down menu top left offers "creation candidates", Magnus's personal favourite. See m:Mix'n'match/Manual for more.

For the Wikidatan, a key point is that these matches, however carried out, add statements to Wikidata if, and naturally only if, there is a Wikidata property associated with the catalog. For everyone, however, the hands-on experience of deciding what is a good match is an education in any scholarly area, with biographical catalogs being particularly fraught. Underpinning recent rapid progress is an open infrastructure for scraping and uploading. Congratulations to Magnus, our data Stakhanovite!
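The matching step at the heart of the tool can be sketched in a few lines: compare a catalog entry's name against candidate item labels and keep the closest ones above a similarity threshold. This is only an illustration of the idea using Python's standard-library difflib and toy data — it is not Magnus Manske's actual implementation:

```python
import difflib

def match_candidates(entry_name, labels, cutoff=0.8):
    """Return up to three labels similar enough to the catalog entry, best first."""
    return difflib.get_close_matches(entry_name, labels, n=3, cutoff=cutoff)

# Toy catalog entry and candidate labels -- illustrative only.
candidates = match_candidates(
    "Jane Smith",
    ["Jane Smith", "Jane Smyth", "John Smith"],
)
```

An exact label match comes back first, near-misses follow, and anything below the cutoff is dropped; the human reviewer (or the gamified mode) then decides among the survivors, which is exactly where biographical catalogs get fraught.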
MediaWiki message delivery (talk) 12:26, 12 March 2018 (UTC)
Discovering inventaire.io
I had some ideas for improving inventaire.io to attract more users.
When I first visit a book page, I only see the basic catalog record. The site only provides information on book sharing among your friends, which for a new user like myself stumbling in means it doesn't show anything about book sharing at all. While I understand this focus on friend groups may be part of your design (and it makes a lot of sense to localize physical books to a geographic area), it makes the site of limited value for newcomers.
Perhaps it could show a stat, e.g. "242 users have copies of this book", to lure passers-by into becoming registered users. Maybe you could affiliate with a bookseller to give them a small discount if they buy the book through your site and add it to their library there.
Try to consider the user experience of a new user who discovers your site from a book page rather than the home page, and think about how to demonstrate your site's value to them right away.
Hope you appreciate my unsolicited advice and best of luck to you! There's not a ton of competition in the book info webpage market. Daask (talk) 10:08, 16 March 2018 (UTC)
- @Daask: thank you for the advice! Yes, there is definitely room for improvement in helping new users discover what this is all about :) -- Maxlath (talk) 19:05, 16 March 2018 (UTC)
Facto Post – Issue 11 – 9 April 2018
The 100 Skins of the Onion

Open Citations Month, with its eminently guessable hashtag, is upon us. We should be utterly grateful that in the past 12 months, so much data on which papers cite which other papers has been made open, and that Wikidata is playing its part in hosting it as "cites" statements. At the time of writing, there are 15.3M Wikidata items that can do that.

Pulling back to look at open access papers in the large, though, there is less reason for celebration. Access in theory does not yet equate to practical access. A recent LSE IMPACT blogpost puts that issue down to "heterogeneity". A useful euphemism to save us from thinking that the whole concept doesn't fall into the realm of the oxymoron.

Some home truths: aggregation is not content management, if it falls short on reusability. The PDF file format is wedded to how humans read documents, not how machines ingest them. The salami-slicer is our friend in the current downloading of open access papers, but for a better metaphor, think about skinning an onion, laboriously, 100 times with diminishing returns. There are of the order of 100 major publisher sites hosting open access papers, and the predominant offer there is still a PDF.

From the discoverability angle, Wikidata's bibliographic resources combined with SPARQL queries are superior in principle, by far, to existing keyword searches run over papers. Open access content should be managed into consistent HTML, something that is currently strenuous. The good news, such as it is, is that much of it is already in XML. The organisational problem of removing further skins from the onion, with sensible prioritisation, is certainly not insuperable. The CORE group (the bloggers in the LSE posting) has some answers, but actually not all that is needed for the text and data mining purposes they highlight. The long tail, or in other words the onion heart when it has become fiddly beyond patience to skin, does call for a pis aller.
But the real knack is to do more between the XML and the heart.
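The discoverability claim above is concrete: once "cites" (P2860) statements are in place, "papers citing X" is a single SPARQL graph pattern rather than a keyword search. A small Python sketch that just assembles such a query for the Wikidata Query Service — the target QID is a placeholder and no network request is made here:

```python
# Build a SPARQL query for "items citing a given paper" via P2860 (cites).
# Q12345 below is a placeholder, not a real paper item.
ENDPOINT = "https://query.wikidata.org/sparql"

def citing_papers_query(qid, limit=100):
    """Return a SPARQL query string for items with a P2860 link to qid."""
    return (
        "SELECT ?paper ?paperLabel WHERE {\n"
        f"  ?paper wdt:P2860 wd:{qid} .\n"
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n'
        f"}} LIMIT {limit}"
    )

query = citing_papers_query("Q12345")
```

Sending this string to the endpoint (with an Accept header for JSON) returns the citing items directly, which is the sense in which structured bibliographic data beats keyword search over PDFs.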
Facto Post – Issue 12 – 28 May 2018
ScienceSource funded

The Wikimedia Foundation announced full funding of the ScienceSource grant proposal from ContentMine on May 18. See the ScienceSource Twitter announcement and 60-second video.
The proposal includes downloading 30,000 open access papers, aiming (roughly speaking) to create a baseline for medical referencing on Wikipedia. It leaves open the question of how these are to be chosen. The basic criteria of WP:MEDRS include a concentration on secondary literature. Attention has to be given to the long tail of diseases that receive less current research. The MEDRS guideline supposes that edge cases will have to be handled, and the premature exclusion of publications that would be in those marginal positions would reduce the value of the collection. Prophylaxis misses the point that gate-keeping will be done by an algorithm. Two well-known but rather different areas where such considerations apply are tropical diseases and alternative medicine.

There are also a number of potential downloading troubles, and these were mentioned in Issue 11. There is likely to be a gap, even with the guideline, between conditions taken to be necessary but not sufficient, and conditions sufficient but not necessary, for candidate papers to be included. With around 10,000 recognised medical conditions in standard lists, being comprehensive is demanding. With all of these aspects of the task, ScienceSource will seek community help.
Editor Charles Matthews, for ContentMine. Please leave feedback for him. Back numbers are here. Reminder: WikiFactMine pages on Wikidata are at WD:WFM. ScienceSource pages will be announced there, and in this mass message. If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page.
Facto Post – Issue 13 – 29 May 2018
The Editor is Charles Matthews, for ContentMine. Please leave feedback for him, on his User talk page.
To subscribe to Facto Post go to Wikipedia:Facto Post mailing list. For the ways to unsubscribe, see the footer.
Facto Post enters its second year, with a Cambridge Blue (OK, Aquamarine) background, a new logo, but no Cambridge blues. On-topic for the ScienceSource project is a project page here. It contains some case studies on how the WP:MEDRS guideline, for the referencing of articles related in any way to human health, is applied in typical discussions.

Close to home also, a template, called {{medrs}} for short, is used to express dissatisfaction with particular references. Technology can help with patrolling, and this Petscan query finds over 450 articles where there is at least one use of the template. Of course the template is merely suggesting there is a possible issue with the reliability of a reference. Deciding the truth of the allegation is another matter.

This maintenance issue is one example of where ScienceSource aims to help. Where the reference is to a scientific paper, its type of algorithm could give a pass/fail opinion on such references. It could therefore assist patrollers of medical articles, with the templated references and more generally. There may be more to proper referencing than that, indeed: context, quite what the statement supported by the reference expresses, prominence and weight. For that kind of consideration, case studies can help. But an algorithm might help to clear the backlog.
If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page.
Facto Post – Issue 14 – 21 July 2018
Officially it is "bridging the gaps in knowledge", with Wikimania 2018 in Cape Town paying tribute to the southern African concept of ubuntu to implement it. Besides face-to-face interactions, Wikimedians do need their power sources.

Facto Post interviewed Jdforrester, who has attended every Wikimania, and now works as Senior Product Manager for the Wikimedia Foundation. His take on tackling the gaps in the Wikimedia movement is that "if we were an army, we could march in a column and close up all the gaps". In his view, though, that is a faulty metaphor: it leads to a complete misunderstanding of the movement, its diversity and different aspirations, and the nature of the work as "fighting" to be done in the open sector. There are many fronts, and as an eventualist he feels the gaps experienced both by editors and by users of Wikimedia content are inevitable. He would like to see a greater emphasis on reuse of content, not simply its volume.

If that may not sound like radicalism, the Decolonizing the Internet conference organized jointly with Whose Knowledge? can redress the picture. It comes with the claim to be "the first ever conference about centering marginalized knowledge online".
If you wish to receive no further issues of Facto Post, please remove your name from our mailing list. Alternatively, to opt out of all massmessage mailings, you may add Category:Wikipedians who opt out of message delivery to your user talk page.
Ways to improve Manalisco
Thanks for creating Manalisco.
New Page Patroller Boleyn just tagged the page as having some issues to fix, and wrote this note for you:
Please add your references.
The tags can be removed by you or another editor once the issues they mention are addressed. If you have questions, you can reply over here and ping me. Or, for broader editing help, you can talk to the volunteers at the Teahouse.
Delivered via the Page Curation tool, on behalf of the reviewer.
An article you recently created, Manalisco, does not have enough sources and citations as written to remain published. It needs more citations from reliable, independent sources. Information that can't be referenced should be removed (verifiability is of central importance on Wikipedia). I've moved your draft to draftspace (with a prefix of "Draft:" before the article title) where you can incubate the article with minimal disruption. When you feel the article meets Wikipedia's general notability guideline and thus is ready for mainspace, please click on the "Submit your draft for review!" button at the top of the page. Best, Barkeep49 (talk) 23:16, 24 November 2018 (UTC)
Hello, Maxlath. It has been over six months since you last edited the Articles for Creation submission or draft page you started, Draft:Manalisco.
In accordance with our policy that Wikipedia is not for the indefinite hosting of material deemed unsuitable for the encyclopedia mainspace, the draft has been deleted. If you wish to retrieve it, you can request its undeletion by following the instructions at this link. An administrator will, in most cases, restore the submission so you can continue to work on it. — JJMC89 (T·C) 22:16, 4 July 2019 (UTC)