tickets #88903

Fwd: support@lists.opensuse.org - search index not up-to-date

Added by docb@opensuse.org about 2 months ago. Updated 1 day ago.

Status:
Workable
Priority:
Normal
Assignee:
Category:
Mailing lists
Target version:
-
Start date:
2021-04-10
Due date:
% Done:

0%

Estimated time:
(Total: 0.00 h)

Description

Dear Admins,

can you please have a look at the issue below?

I can confirm that I don't find any entries newer than about 4 months old.

Thanks
Axel

---------- Forwarded message ----------

Subject: support@lists.opensuse.org - here: searching for old posts
Date: Sunday, 21 February 2021, 18:13:36 CET
From: michael.kasimir@gmx.de
To: Axel Braun docb@opensuse.org

Hello Axel,

I have just noticed that the search for a thread from 2020-12-04 about

openSUSE Leap 15.2 - Scanner problem with Paper size

apparently does not work.

Search terms:
Scanner problem
Paper size
xsane
etc.

Why does searching for individual terms in the "Search this list" field at

https://lists.opensuse.org/archives/list/support@lists.opensuse.org/2020/12/ [1]

return no results here?
Or am I just too stupid for this?

Kind regards

Michael Kasimir

Be Free, Be Linux


[1] https://lists.opensuse.org/archives/list/support@lists.opensuse.org/2020/12/


--
Dr. Axel Braun docb@opensuse.org
Member of the openSUSE Board


Subtasks

tickets #90935: lists.opensuse.org search form doesn't work (New)

History

#1 Updated by pjessen about 2 months ago

  • Subject changed from Fwd: support@lists.opensuse.org - here: searching for old posts to Fwd: support@lists.opensuse.org - search index not up-to-date
  • Private changed from Yes to No

Yes, I can confirm that. We have not yet been able to complete a full indexing run of the old archives. It is incredibly slow (it takes days), and something is gobbling up too much memory, so the indexing run gets killed by the OOM killer. I thought I had already asked someone (maybe Lars?) to increase the amount of memory on mailman3, but I may have forgotten.

#2 Updated by lrupp about 2 months ago

  • Category set to Mailing lists
  • Status changed from New to In Progress
  • Assignee set to pjessen

pjessen wrote:

Yes, I can confirm that. We have not yet been able to complete a full indexing run of the old archives. It is incredibly slow (it takes days), and something is gobbling up too much memory, so the indexing run gets killed by the OOM killer. I thought I had already asked someone (maybe Lars?) to increase the amount of memory on mailman3, but I may have forgotten.

mailman3 is now using 8GB of RAM and 6 CPUs. Let's see if this is enough to run a full index.
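
For reference, the new allocation can be double-checked from inside the VM with standard tools:

free -h     # total memory should now show roughly 8G
nproc       # should report 6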

#3 Updated by pjessen about 2 months ago

Thanks Lars!
I started an indexing run today at 13:00. Let's see how it goes.
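
Presumably the run was kicked off along these lines, detached with nohup so it survives the login session; this matches the command and the nohup.out file mentioned later in this ticket, but the exact invocation is an assumption:

cd /var/lib/mailman_webui
nohup python3 manage.py update_index &    # run as the mailman user
tail -f nohup.out                         # follow progress and errors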

#4 Updated by pjessen about 2 months ago

  • Status changed from In Progress to Feedback

Well, it looks like it ran for almost 24 hours before failing with:

[ERROR/MainProcess] Failed indexing 1040001 - 1041000 (retry 5/5): Error writing block 3249080 (No space left on device) (pid 2487): Error writing block 3249080 (No space left on device)

Lars, vdb is full - I have no idea how much space we might need. If you can give me some more space (maybe double?) and reboot mailman3, it should be easy to grow it with xfs_growfs.
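
For reference, a sketch of the resize, assuming /dev/vdb is the index volume mounted at /data as shown in the next comment. Once the virtual disk has been enlarged on the host side and the new size is visible in the guest, the mounted XFS filesystem can be grown online:

df -h /data           # size before
xfs_growfs /data      # grow the XFS filesystem to fill the enlarged device (online)
df -h /data           # size after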

#5 Updated by lrupp about 1 month ago

  • % Done changed from 0 to 40

pjessen wrote:

Lars, vdb is full - I have no idea how much space we might need. If you can give me some more space (maybe double?) and reboot mailman3, it should be easy to grow it with xfs_growfs.

Reboot is not needed for xfs:
/dev/vdb 250G 101G 150G 41% /data

Back to you :-)

#6 Updated by pjessen about 1 month ago

I forgot to say thanks!

Well, my most recent indexing attempt looks like it completed, but when I search for something simple on the factory list, I get no results.

#7 Updated by pjessen about 1 month ago

Taking some notes:
Attempting to start over, I wanted to run 'clear_index'. However, this fails, because shutil.rmtree does not work on symlinks.
I have changed the mounting setup slightly - /dev/vdb is now mounted directly on /var/lib/mailman_webui/xapian_index (fstab not yet saltified).
One problem I had was getting it mounted with the right uid:gid of mailman:mailman - I don't know how to do that :-( (they are not valid mount options for xfs); see the sketch below.
So, I am now (9 March 17:00) running a full update_index again (uid: mailman, dir: /var/lib/mailman_webui, 'python3 manage.py update_index'). For the time being I have disabled the hourly jobs (crontab -e -u mailman). Btw, they seem to be specified twice ??
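
A sketch of the workaround for the ownership and clearing problems above, with paths taken from these notes; the exact commands are an assumption, not a record of what was run. XFS has no uid=/gid= mount options, so ownership has to be fixed on the mounted directory itself, and the index can be emptied by deleting its contents rather than removing the (symlinked) directory:

mount /dev/vdb /var/lib/mailman_webui/xapian_index              # no uid=/gid= options exist for xfs
chown mailman:mailman /var/lib/mailman_webui/xapian_index       # fix ownership after mounting instead
find /var/lib/mailman_webui/xapian_index -mindepth 1 -delete    # empty the index without touching the mount point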

#8 Updated by pjessen about 1 month ago

I have closed the related issue #87701.

#9 Updated by pjessen 28 days ago

  • Status changed from Feedback to In Progress

Well, I have been busy elsewhere; unfortunately, the indexing was killed by the OOM killer on 11 March at 00:15 UTC:

[Thu Mar 11 00:15:28 2021] Out of memory: Killed process 13888 (python3) total-vm:3184832kB, anon-rss:2995320kB, file-rss:0kB, shmem-rss:0kB
[Thu Mar 11 00:15:28 2021] oom_reaper: reaped process 13888 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Next, I'm going to try again with the process exempted from OOM killing.

#10 Updated by pjessen 28 days ago

Process 10036, exempted with "echo -17 >/proc/10036/oom_adj".
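
For context: /proc/<pid>/oom_adj is the legacy interface, and -17 is the "never kill this process" value. On current kernels the same effect is available via oom_score_adj (range -1000 to 1000); a hedged equivalent for the PID above:

echo -17 > /proc/10036/oom_adj           # legacy interface, as used above (-17 = exempt)
echo -1000 > /proc/10036/oom_score_adj   # modern equivalent (-1000 = exempt)
cat /proc/10036/oom_score                # inspect the current badness score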

#11 Updated by pjessen 26 days ago

Status just about 2 days later, from 'top':

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                 
10036 mailman   20   0 6241724 5.776g   4116 D 13.29 75.72 843:17.77 python3                                                                                

Still using copious amounts of memory.

#12 Updated by pjessen 24 days ago

Adjusting oom_adj seems to have done the trick - other processes are being killed, but not my indexer:

[Tue Mar 16 12:35:21 2021] Out of memory: Killed process 8222 (uwsgi) total-vm:1502140kB, anon-rss:1263228kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 12:40:44 2021] Out of memory: Killed process 16518 (uwsgi) total-vm:1503516kB, anon-rss:1305548kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:34:23 2021] Out of memory: Killed process 16580 (uwsgi) total-vm:1025444kB, anon-rss:794596kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:35:00 2021] Out of memory: Killed process 17987 (uwsgi) total-vm:1042868kB, anon-rss:795520kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:35:59 2021] Out of memory: Killed process 17990 (uwsgi) total-vm:1042792kB, anon-rss:790036kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:37:06 2021] Out of memory: Killed process 18007 (uwsgi) total-vm:940696kB, anon-rss:690672kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:40:07 2021] Out of memory: Killed process 18017 (uwsgi) total-vm:886860kB, anon-rss:687180kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:45:03 2021] Out of memory: Killed process 18050 (uwsgi) total-vm:518488kB, anon-rss:318396kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:46:30 2021] Out of memory: Killed process 18107 (uwsgi) total-vm:518356kB, anon-rss:270276kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:03:08 2021] Out of memory: Killed process 18178 (uwsgi) total-vm:641932kB, anon-rss:394652kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:16:16 2021] Out of memory: Killed process 24913 (python3) total-vm:357536kB, anon-rss:118196kB, file-rss:0kB, shmem-rss:20kB
[Tue Mar 16 15:16:36 2021] Out of memory: Killed process 18391 (uwsgi) total-vm:545860kB, anon-rss:299544kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:31:33 2021] Out of memory: Killed process 18556 (uwsgi) total-vm:521936kB, anon-rss:293136kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:37:14 2021] Out of memory: Killed process 18725 (uwsgi) total-vm:624424kB, anon-rss:424416kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:24 2021] Out of memory: Killed process 18789 (uwsgi) total-vm:628112kB, anon-rss:427080kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:40 2021] Out of memory: Killed process 18791 (uwsgi) total-vm:627124kB, anon-rss:426184kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:54 2021] Out of memory: Killed process 18793 (uwsgi) total-vm:629040kB, anon-rss:426284kB, file-rss:20kB, shmem-rss:112kB
[Tue Mar 16 15:38:07 2021] Out of memory: Killed process 18795 (uwsgi) total-vm:546988kB, anon-rss:347864kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:38:30 2021] Out of memory: Killed process 18800 (uwsgi) total-vm:575584kB, anon-rss:374892kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:39:32 2021] Out of memory: Killed process 18807 (uwsgi) total-vm:565968kB, anon-rss:365340kB, file-rss:120kB, shmem-rss:116kB
[Tue Mar 16 15:46:39 2021] Out of memory: Killed process 18817 (uwsgi) total-vm:579104kB, anon-rss:333344kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:01:39 2021] Out of memory: Killed process 18914 (uwsgi) total-vm:603248kB, anon-rss:353320kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:16:27 2021] Out of memory: Killed process 19089 (uwsgi) total-vm:598240kB, anon-rss:353472kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:31:29 2021] Out of memory: Killed process 19281 (uwsgi) total-vm:603024kB, anon-rss:354440kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:35:38 2021] Out of memory: Killed process 19452 (uwsgi) total-vm:515120kB, anon-rss:310860kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:37:07 2021] Out of memory: Killed process 19490 (uwsgi) total-vm:494820kB, anon-rss:295900kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:46:30 2021] Out of memory: Killed process 19503 (uwsgi) total-vm:588940kB, anon-rss:360108kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:59:30 2021] Out of memory: Killed process 19610 (uwsgi) total-vm:599740kB, anon-rss:402044kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:01:39 2021] Out of memory: Killed process 19734 (uwsgi) total-vm:630372kB, anon-rss:403740kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:11:29 2021] Out of memory: Killed process 8148 (python3) total-vm:290248kB, anon-rss:114788kB, file-rss:0kB, shmem-rss:0kB
[Tue Mar 16 17:17:25 2021] Out of memory: Killed process 19782 (uwsgi) total-vm:638676kB, anon-rss:396036kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 17:32:09 2021] Out of memory: Killed process 19963 (uwsgi) total-vm:663012kB, anon-rss:416100kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:33:21 2021] Out of memory: Killed process 20160 (uwsgi) total-vm:705708kB, anon-rss:504576kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:34:43 2021] Out of memory: Killed process 20177 (uwsgi) total-vm:711088kB, anon-rss:510672kB, file-rss:420kB, shmem-rss:112kB
[Tue Mar 16 17:35:07 2021] Out of memory: Killed process 20188 (uwsgi) total-vm:703720kB, anon-rss:502764kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:43:43 2021] Out of memory: Killed process 20195 (uwsgi) total-vm:689232kB, anon-rss:487500kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:44:00 2021] Out of memory: Killed process 20390 (uwsgi) total-vm:692984kB, anon-rss:491716kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:49:34 2021] Out of memory: Killed process 20395 (uwsgi) total-vm:615860kB, anon-rss:376948kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 18:01:45 2021] Out of memory: Killed process 20574 (uwsgi) total-vm:656248kB, anon-rss:408032kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 18:16:52 2021] Out of memory: Killed process 21565 (uwsgi) total-vm:687024kB, anon-rss:458812kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 19:43:30 2021] Out of memory: Killed process 31188 (uwsgi) total-vm:1197676kB, anon-rss:999528kB, file-rss:0kB, shmem-rss:124kB

#13 Updated by pjessen 21 days ago

Well, I forgot to look in on it for a couple of days. The indexing run ended abnormally on 19 March at 04:48:

mailman3 (lists.o.o):/var/lib/mailman_webui # tail  nohup.out
    last_max_pk=max_pk)
  File "/usr/lib/python3.6/site-packages/haystack/management/commands/update_index.py", line 97, in do_update
    backend.update(index, current_qs, commit=commit)
  File "/usr/lib/python3.6/site-packages/xapian_backend.py", line 495, in update
    database.close()
xapian.DatabaseError: Error writing block 3249080 (No space left on device)

The xapian filesystem has 46% free, so presumably that is not the problem. I have no idea what else it might be.
I have just restarted it, again with oom_adj set. Process 17041.
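
For the record, "No space left on device" with plenty of free blocks can also mean inode exhaustion, or a full temporary/working location elsewhere. Two generic checks one could run (not a record of what was actually checked), assuming the index volume is still mounted under /var/lib/mailman_webui/xapian_index as per comment #7:

df -h /var/lib/mailman_webui/xapian_index /tmp   # block usage on the index volume and on temp space
df -i /var/lib/mailman_webui/xapian_index        # inode usage - ENOSPC can also mean no free inodes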

#14 Updated by pjessen 12 days ago

I have just restarted it again; it ran until 25 March 10:50, and it is difficult to say why it stopped.
I'm thinking of scripting it, but I can't think of a good stop condition.
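
A minimal sketch of what such a wrapper could look like, assuming the same update_index invocation as before; the stop condition here is simply a clean exit or an arbitrary cap on attempts (all values hypothetical, not what is deployed):

#!/bin/bash
# hypothetical restart wrapper for the full indexing run; run as the mailman user
cd /var/lib/mailman_webui || exit 1
for attempt in 1 2 3 4 5; do                  # arbitrary retry cap as the stop condition
    echo "$(date -u) starting update_index, attempt $attempt" >> reindex.log
    if python3 manage.py update_index >> reindex.log 2>&1; then
        echo "$(date -u) update_index completed cleanly" >> reindex.log
        break                                 # clean exit - stop retrying
    fi
    sleep 60                                  # brief pause before the next attempt
done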

#15 Updated by pjessen 7 days ago

  • Due date set to 2021-05-04
  • Status changed from In Progress to Workable

Well, this is getting more and more annoying. I am re-enabling the hourly cronjob; we'll have to revisit this later.
