Dear all,
"The Journal of Scientific Practice and Integrity" is a relevant good
website. Alas Clay Risen, "David Egilman, Doctor Who Took On Drug
Companies, Dies at 71", "The New-York Times", April 15, 2024,
HTTPS://WWW.NYTimes.com/2024/04/15/health/david-egilman-dead.html
reports that an editor thereof died. Alas,
"Notice of Journal Closure
January 26, 2024 EDT
We want to inform our readers and contributors that The Journal of
Scientific Practice and Integrity will be closed from this year. We will
not accept any more articles for publication henceforth. The articles
published so far will be available for readers at jospi.org, as far as
possible. We encourage contributors and readers to print their articles
as PDF files, and or download archives from the website of the journal
so that the articles will not be lost if we have to close the website
due to any reason in future.
It was our sincere effort to offer an open access, free journal for both
contributors and readers over the last five years. We hope we were
successful in our efforts. We regret that we have to stop this activity
owing to unavoidable circumstances. We thank our contributors and readers
for the overwhelming response they gave us.
Editors"
says
HTTPS://WWW.JOSPI.org/post/2340-notice-of-journal-closure
so I attempted to copy "The Journal of Scientific Practice and Integrity"
to
HTTP://Gloucester.Insomnia247.NL/Evil_which_is_so-called_science/Journal_of_Scientific_Practice_and_Integrity/
I tried e.g.
pavuk -noRobots -mode mirror -preserve_time -index_name index_by_Pavuk.HTM \
  -store_index -hack_add_index https://www.jospi.org/ \
  > Pavuk_www.jospi.org.stdout.txt 2> Pavuk_www.jospi.org.stderr.txt
and
httrack --robots=0 --mirror --near --extra-log --file-log --index --build-top-index --search-index '-*p3' --debug-headers WWW.JOSPI.org '+*'
and
lftp -e mirror https://WWW.JOSPI.org
and
wget2 -e robots=off --output-file=Wget_logfile.txt --verbose --xattr --timestamping --server-response --mirror --page-requisites --backup-converted WWW.JOSPI.org
and
wget -e robots=off --output-file=Wget_logfile.txt --verbose --xattr --rejected-log=rejected-log.txt --timestamping --server-response --mirror --page-requisites --backup-converted WWW.JOSPI.org
but they all failed miserably! Nor was pwget any help.
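My guess, which I have not verified, is that the Scholastica server either
rejects the default user agents of such tools or assembles its pages with
JavaScript, so the crawlers find little to follow. A quick check of the
first guess (the browser-like User-Agent string below is merely an example,
not something the site is known to require):

# Compare the HTTP status code returned for curl's default User-Agent
# with that returned for a browser-like one
# (-s = silent, -o /dev/null = discard body, -w = print status code,
#  -A = set the User-Agent header).
curl -s -o /dev/null -w '%{http_code}\n' 'https://www.jospi.org/'
curl -s -o /dev/null -w '%{http_code}\n' \
  -A 'Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0' \
  'https://www.jospi.org/'

If the second request succeeds where the first is refused, re-running wget
with --user-agent set to the same string might already do better.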
Manually backing up (hopefully) everything from JOSPI.org via GUI web
browsers took me nearly 3 hours and saved circa 705 files!
Scholastica sucks!
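To spare anyone else such a manual session, I may also try pulling a URL
list from a sitemap and feeding it to wget; note that the existence of a
sitemap.xml at jospi.org is merely my assumption, so this is only a sketch:

# Fetch the sitemap (if there is one), strip the XML tags around each
# <loc> entry, and hand the resulting URL list to wget on standard
# input (--input-file=- reads URLs from stdin).
curl -s 'https://www.jospi.org/sitemap.xml' \
  | grep -o '<loc>[^<]*</loc>' \
  | sed 's/<[^>]*>//g' \
  | wget --input-file=- --page-requisites --convert-links --adjust-extension \
         --directory-prefix=jospi_mirror -e robots=off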
If I missed anything, please warn me so that we can attempt to save it.
Does anyone know the best way to back up a Scholastica website?
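One approach I have not yet tried, in case the pages are built by
JavaScript: dump the rendered DOM with a headless browser and harvest the
links from that (the binary may be named chromium, chromium-browser or
google-chrome depending on the distribution; this too is only a sketch):

# Render the front page headlessly and write the serialized DOM to a
# file (--dump-dom prints the rendered page to stdout).
chromium --headless --disable-gpu --dump-dom 'https://www.jospi.org/' > rendered.html
# Extract the href targets from the rendered page for later fetching.
grep -o 'href="[^"]*"' rendered.html | sed 's/^href="//;s/"$//' | sort -u > links.txt

Whether that scales to every article page is another question, hence my
asking here.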
--- MBSE BBS v1.0.8.4 (Linux-x86_64)
* Origin: A noiseless patient Spider (3:633/280.2@fidonet)