Rainwater
National Hazard
  
Posts: 987
Registered: 22-12-2021
Member Is Offline
Mood: Break'n glass & kick'n a's
I would stick with a bandwidth filter or a service request limit.
For example, limit a single IP address to 2 concurrent requests:
1k/sec for pages, x/sec for images. That way we all share the information superhighway.
The result for me would be that pages load as normal, with nothing noticeably different other than images/files loading more slowly.
And a bot archiving the site would simply take a few months, without causing a DoS.
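For what it's worth, a per-IP request limit like this is often implemented as a token bucket. A minimal sketch (the class name, rates, and in-memory dicts are illustrative assumptions; a real forum server would more likely do this at the web-server or proxy layer, e.g. nginx or HAProxy):

```python
import time

class TokenBucket:
    """Per-IP token bucket: roughly `rate` requests/sec, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = {}          # ip -> tokens remaining
        self.last = {}            # ip -> time of last check

    def allow(self, ip, now=None):
        # `now` can be injected for testing; defaults to the monotonic clock.
        now = time.monotonic() if now is None else now
        last = self.last.get(ip, now)
        tokens = self.tokens.get(ip, self.capacity)
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        self.last[ip] = now
        if tokens >= 1:
            self.tokens[ip] = tokens - 1
            return True
        self.tokens[ip] = tokens
        return False
```

A well-behaved archiving bot just sees denials and slows down; normal browsing stays under the burst limit and is unaffected.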
"You can't do that" - challenge accepted
MrDoctor
Hazard to Self

Posts: 60
Registered: 5-7-2022
Member Is Offline
Quote: Originally posted by Rainwater  | I would stick with a bandwidth filter or a service request limit.
For example, limit a single IP address to 2 concurrent requests:
1k/sec for pages, x/sec for images. That way we all share the information superhighway.
The result for me would be that pages load as normal, with nothing noticeably different other than images/files loading more slowly.
And a bot archiving the site would simply take a few months, without causing a DoS. |
I'm not sure that is feasible. However, request cooldowns ARE reasonable: something like a 5-10 second delay between consecutive page-load requests.
I think the IT guys need to chime in here on what is possible; rate limiting, I think, would require something a bit more advanced.
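The cooldown idea is simpler than full rate limiting. A rough sketch of what it would look like server-side (the 5-second window and in-memory dict are assumptions for illustration; a real deployment would need to expire old entries):

```python
import time

COOLDOWN_SECONDS = 5.0   # assumed value; the post suggests 5-10 seconds
_last_request = {}       # ip -> time of last accepted request

def allow_request(ip, now=None):
    """Reject a request if the same IP was served within the cooldown window."""
    now = time.monotonic() if now is None else now
    last = _last_request.get(ip)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False
    _last_request[ip] = now
    return True
```

The trade-off versus a token bucket is that a cooldown also slows down legitimate users who open several threads in quick succession.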
Another thing to note: Grok 3 was recently released, and Grok can access the internet to conduct research. For certain kinds of chemistry questions I
found it quite helpful, not for predicting the outcomes of reactions, but as an advanced search engine that works from a vague description of what I
want, since I have no idea how people navigate scholarly articles and resources. I think ChatGPT can do this too now. It's possible the site gets pinged each
time one of these models is asked a new chemistry question. I assume those bots don't train on it, nor do they store the articles that are used/analyzed; the new
DeepSeek and other reasoning models try to generate more accurate responses by improving how real information is analyzed, rather than simply
fabricating a response. If they aren't being considerate or caching, then they could be the cause of this.