tickets #88903

closed

Fwd: support@lists.opensuse.org - search index not up-to-date

Added by docb@opensuse.org over 3 years ago. Updated 4 months ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Mailing lists
Target version: -
Start date: 2021-04-10
Due date:
% Done: 100%
Estimated time: (Total: 0.00 h)

Description

Dear Admins,

can you please have a look at the issue below?

I can confirm that I don't find any entries newer than about 4 months old.

Thanks
Axel

---------- Forwarded message ----------

Subject: support@lists.opensuse.org - here: searching for old posts
Date: Sunday, 21 February 2021, 18:13:36 CET
From: michael.kasimir@gmx.de
To: Axel Braun docb@opensuse.org

Hello Axel,

I have just noticed that searching for a thread from 2020-12-04 about

openSUSE Leap 15.2 - Scanner problem with Paper size

apparently does not work.

Search terms:
Scanner problem
Paper size
xsane
etc.

Why does a search at
https://lists.opensuse.org/archives/list/support@lists.opensuse.org/2020/12/
[1]

for individual terms in the field

Search this list

return no results here?
Or am I just too daft to do it?

Kind regards

Michael Kasimir

Be Free, Be Linux


[1] https://lists.opensuse.org/archives/list/support@lists.opensuse.org/2020/12/


--
Dr. Axel Braun docb@opensuse.org
Member of the openSUSE Board


Subtasks 1 (0 open, 1 closed)

tickets #90935: lists.opensuse.org search form doesn't work - Closed - 2021-04-10


Related issues 1 (0 open, 1 closed)

Related to openSUSE admin - tickets #159873: Rework Mailman archive search - Resolved - crameleon - 2023-05-25 - 2023-05-25

Actions #1

Updated by pjessen over 3 years ago

  • Subject changed from Fwd: support@lists.opensuse.org - Hier: Suche nach alten Beiträgen to Fwd: support@lists.opensuse.org - search index not up-to-ate
  • Private changed from Yes to No

Yes, I can confirm that; we have not yet been able to complete a full indexing run of the old archives. It is incredibly slow (days) and something is gobbling up too much memory, which means the indexing is killed by the oom killer. I thought I had already asked someone, maybe Lars, to increase the amount of memory on mailman3, but I may have forgotten.

Actions #2

Updated by lrupp over 3 years ago

  • Category set to Mailing lists
  • Status changed from New to In Progress
  • Assignee set to pjessen

pjessen wrote:

Yes, I can confirm that; we have not yet been able to complete a full indexing run of the old archives. It is incredibly slow (days) and something is gobbling up too much memory, which means the indexing is killed by the oom killer. I thought I had already asked someone, maybe Lars, to increase the amount of memory on mailman3, but I may have forgotten.

mailman3 is now using 8GB of RAM and 6 CPUs. Let's see if this is enough to run a full index.

Actions #3

Updated by pjessen over 3 years ago

Thanks Lars!
I started an indexing run today at 1300. Let us see how it goes.

Actions #4

Updated by pjessen over 3 years ago

  • Status changed from In Progress to Feedback

Well, it looks like it ran for almost 24 hours before:

[ERROR/MainProcess] Failed indexing 1040001 - 1041000 (retry 5/5): Error writing block 3249080 (No space left on device) (pid 2487): Error writing block 3249080 (No space left on device)

Lars, vdb is full - I have no idea how much space we might need. If you can give me some more space (maybe double?) and reboot mailman3, it should be easy to grow it with xfs_growfs.
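
A minimal sketch of the growing step, assuming the index volume is /dev/vdb mounted at /data (as in the df output in the next comment):

lsblk /dev/vdb      # confirm the virtual disk has been enlarged on the host side
df -h /data         # current size and usage before growing
xfs_growfs /data    # grow the XFS filesystem to fill the device, while mounted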

Actions #5

Updated by lrupp over 3 years ago

  • % Done changed from 0 to 40

pjessen wrote:

Lars, vdb is full - I have no idea how much space we might need. If you can give me some more space (maybe double?) and reboot mailman3, it should be easy to grow it with xfs_growfs.

Reboot is not needed for xfs:
/dev/vdb 250G 101G 150G 41% /data

Back to you :-)

Actions #6

Updated by pjessen over 3 years ago

I forgot to say thanks!

Well, my most recent indexing attempt looks like it completed, but when I search for something simple on the factory list, I get no results.
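
A quick way to check whether the completed run actually produced any index entries is xapian-delve from the xapian-core tools; a minimal sketch, assuming the index lives at /var/lib/mailman_webui/xapian_index as referenced later in this ticket:

xapian-delve /var/lib/mailman_webui/xapian_index
# prints the database UUID and "number of documents = N", among other statistics;
# N = 0 (or a suspiciously small N) would explain the empty search results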

Actions #7

Updated by pjessen over 3 years ago

Taking some notes:
Attempting to start over, I wanted to run 'clear_index'. This, however, fails, as shutil.rmtree does not work on symlinks.
I have changed the mounting setup slightly - /dev/vdb is now mounted directly on /var/lib/mailman_webui/xapian_index (fstab not yet saltified).
One problem I had was getting it mounted with the right uid:gid of mailman:mailman; I don't know how to do that :-( (they are not valid mount options for xfs) - see the sketch below.
So, I am now (9 March 17:00) running a full update_index again (uid: mailman, dir: /var/lib/mailman_webui, 'python3 manage.py update_index'). For the time being I have disabled the hourly jobs (crontab -e -u mailman). By the way, they seem to be specified twice??
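
Since uid= and gid= are not valid mount options for XFS, one workaround for the ownership problem is to chown the root of the mounted filesystem once after mounting; the change is stored on disk and survives remounts. A minimal sketch, assuming /dev/vdb mounted at /var/lib/mailman_webui/xapian_index:

mount /dev/vdb /var/lib/mailman_webui/xapian_index
chown mailman:mailman /var/lib/mailman_webui/xapian_index   # persisted in the root inode of the XFS filesystem
cd /var/lib/mailman_webui && sudo -u mailman python3 manage.py update_index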

Actions #8

Updated by pjessen over 3 years ago

Have closed related issue #87701

Actions #9

Updated by pjessen over 3 years ago

  • Status changed from Feedback to In Progress

Well, I have been busy elsewhere, unfortunately the indexing was killed by the oom killer on 11 March 00:15 UTC.

[Thu Mar 11 00:15:28 2021] Out of memory: Killed process 13888 (python3) total-vm:3184832kB, anon-rss:2995320kB, file-rss:0kB, shmem-rss:0kB
[Thu Mar 11 00:15:28 2021] oom_reaper: reaped process 13888 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Next, I'm going to try it with the process disabled for oom killing.

Actions #10

Updated by pjessen over 3 years ago

Process 10036, "echo -17 >/proc/10036/oom_adj".
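
Note that /proc/<pid>/oom_adj is the legacy interface; on newer kernels the equivalent knob is /proc/<pid>/oom_score_adj, where -1000 exempts the process from the OOM killer entirely. A minimal sketch:

echo -17 > /proc/<pid>/oom_adj            # legacy interface (as used above): -17 means "never kill this process"
echo -1000 > /proc/<pid>/oom_score_adj    # current interface: range -1000..1000, -1000 disables OOM killing for the process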

Actions #11

Updated by pjessen over 3 years ago

Status just about 2 days later, from 'top' :

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                 
10036 mailman   20   0 6241724 5.776g   4116 D 13.29 75.72 843:17.77 python3                                                                                

Still using copious amounts of memory.

Actions #12

Updated by pjessen over 3 years ago

Adjusting oom_adj seems to have done the trick - other processes are being killed, but not my indexer:

[Tue Mar 16 12:35:21 2021] Out of memory: Killed process 8222 (uwsgi) total-vm:1502140kB, anon-rss:1263228kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 12:40:44 2021] Out of memory: Killed process 16518 (uwsgi) total-vm:1503516kB, anon-rss:1305548kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:34:23 2021] Out of memory: Killed process 16580 (uwsgi) total-vm:1025444kB, anon-rss:794596kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:35:00 2021] Out of memory: Killed process 17987 (uwsgi) total-vm:1042868kB, anon-rss:795520kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:35:59 2021] Out of memory: Killed process 17990 (uwsgi) total-vm:1042792kB, anon-rss:790036kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:37:06 2021] Out of memory: Killed process 18007 (uwsgi) total-vm:940696kB, anon-rss:690672kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:40:07 2021] Out of memory: Killed process 18017 (uwsgi) total-vm:886860kB, anon-rss:687180kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 14:45:03 2021] Out of memory: Killed process 18050 (uwsgi) total-vm:518488kB, anon-rss:318396kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 14:46:30 2021] Out of memory: Killed process 18107 (uwsgi) total-vm:518356kB, anon-rss:270276kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:03:08 2021] Out of memory: Killed process 18178 (uwsgi) total-vm:641932kB, anon-rss:394652kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:16:16 2021] Out of memory: Killed process 24913 (python3) total-vm:357536kB, anon-rss:118196kB, file-rss:0kB, shmem-rss:20kB
[Tue Mar 16 15:16:36 2021] Out of memory: Killed process 18391 (uwsgi) total-vm:545860kB, anon-rss:299544kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:31:33 2021] Out of memory: Killed process 18556 (uwsgi) total-vm:521936kB, anon-rss:293136kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:37:14 2021] Out of memory: Killed process 18725 (uwsgi) total-vm:624424kB, anon-rss:424416kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:24 2021] Out of memory: Killed process 18789 (uwsgi) total-vm:628112kB, anon-rss:427080kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:40 2021] Out of memory: Killed process 18791 (uwsgi) total-vm:627124kB, anon-rss:426184kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:37:54 2021] Out of memory: Killed process 18793 (uwsgi) total-vm:629040kB, anon-rss:426284kB, file-rss:20kB, shmem-rss:112kB
[Tue Mar 16 15:38:07 2021] Out of memory: Killed process 18795 (uwsgi) total-vm:546988kB, anon-rss:347864kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 15:38:30 2021] Out of memory: Killed process 18800 (uwsgi) total-vm:575584kB, anon-rss:374892kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 15:39:32 2021] Out of memory: Killed process 18807 (uwsgi) total-vm:565968kB, anon-rss:365340kB, file-rss:120kB, shmem-rss:116kB
[Tue Mar 16 15:46:39 2021] Out of memory: Killed process 18817 (uwsgi) total-vm:579104kB, anon-rss:333344kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:01:39 2021] Out of memory: Killed process 18914 (uwsgi) total-vm:603248kB, anon-rss:353320kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:16:27 2021] Out of memory: Killed process 19089 (uwsgi) total-vm:598240kB, anon-rss:353472kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:31:29 2021] Out of memory: Killed process 19281 (uwsgi) total-vm:603024kB, anon-rss:354440kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:35:38 2021] Out of memory: Killed process 19452 (uwsgi) total-vm:515120kB, anon-rss:310860kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 16:37:07 2021] Out of memory: Killed process 19490 (uwsgi) total-vm:494820kB, anon-rss:295900kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:46:30 2021] Out of memory: Killed process 19503 (uwsgi) total-vm:588940kB, anon-rss:360108kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 16:59:30 2021] Out of memory: Killed process 19610 (uwsgi) total-vm:599740kB, anon-rss:402044kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:01:39 2021] Out of memory: Killed process 19734 (uwsgi) total-vm:630372kB, anon-rss:403740kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:11:29 2021] Out of memory: Killed process 8148 (python3) total-vm:290248kB, anon-rss:114788kB, file-rss:0kB, shmem-rss:0kB
[Tue Mar 16 17:17:25 2021] Out of memory: Killed process 19782 (uwsgi) total-vm:638676kB, anon-rss:396036kB, file-rss:0kB, shmem-rss:116kB
[Tue Mar 16 17:32:09 2021] Out of memory: Killed process 19963 (uwsgi) total-vm:663012kB, anon-rss:416100kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:33:21 2021] Out of memory: Killed process 20160 (uwsgi) total-vm:705708kB, anon-rss:504576kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:34:43 2021] Out of memory: Killed process 20177 (uwsgi) total-vm:711088kB, anon-rss:510672kB, file-rss:420kB, shmem-rss:112kB
[Tue Mar 16 17:35:07 2021] Out of memory: Killed process 20188 (uwsgi) total-vm:703720kB, anon-rss:502764kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:43:43 2021] Out of memory: Killed process 20195 (uwsgi) total-vm:689232kB, anon-rss:487500kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:44:00 2021] Out of memory: Killed process 20390 (uwsgi) total-vm:692984kB, anon-rss:491716kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 17:49:34 2021] Out of memory: Killed process 20395 (uwsgi) total-vm:615860kB, anon-rss:376948kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 18:01:45 2021] Out of memory: Killed process 20574 (uwsgi) total-vm:656248kB, anon-rss:408032kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 18:16:52 2021] Out of memory: Killed process 21565 (uwsgi) total-vm:687024kB, anon-rss:458812kB, file-rss:0kB, shmem-rss:112kB
[Tue Mar 16 19:43:30 2021] Out of memory: Killed process 31188 (uwsgi) total-vm:1197676kB, anon-rss:999528kB, file-rss:0kB, shmem-rss:124kB

Actions #13

Updated by pjessen over 3 years ago

Well, I forgot to look in on it for a couple of days. The indexing abended (terminated abnormally) on 19 March at 04:48:

mailman3 (lists.o.o):/var/lib/mailman_webui # tail  nohup.out
    last_max_pk=max_pk)
  File "/usr/lib/python3.6/site-packages/haystack/management/commands/update_index.py", line 97, in do_update
    backend.update(index, current_qs, commit=commit)
  File "/usr/lib/python3.6/site-packages/xapian_backend.py", line 495, in update
    database.close()
xapian.DatabaseError: Error writing block 3249080 (No space left on device)

The xapian filesystem has 46% free, so presumably that is not the problem. I have no idea what else it might be.
I have just restarted it, with oom_adj. Process 17041.
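
For what it's worth, ENOSPC with plenty of free blocks can also come from inode exhaustion or from some other filesystem filling up during the run; a minimal diagnostic sketch, assuming the index path used elsewhere in this ticket:

df -h /var/lib/mailman_webui/xapian_index   # block usage of the index filesystem
df -i /var/lib/mailman_webui/xapian_index   # inode usage - "No space left on device" is also raised when inodes run out
df -h /tmp                                  # other filesystems written during the run can fill up as well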

Actions #14

Updated by pjessen over 3 years ago

Have just restarted it again; it ran until 25 March 10:50, and it is difficult to say why it stopped.
I'm thinking of scripting it, but I can't think of a good stop-condition.
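
For the stop-condition, one option is simply the indexer's exit status: rerun until it exits cleanly, with a cap on the number of attempts. A hypothetical wrapper sketch (the script name and attempt limit are made up), run as the mailman user:

#!/bin/sh
# reindex-until-done.sh - hypothetical wrapper, not part of the deployed setup
cd /var/lib/mailman_webui || exit 1
for attempt in 1 2 3 4 5; do
    echo "update_index attempt $attempt started $(date)"
    if python3 manage.py update_index; then
        echo "indexing completed cleanly"
        exit 0
    fi
done
echo "giving up after 5 failed attempts" >&2
exit 1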

Actions #15

Updated by pjessen over 3 years ago

  • Due date set to 2021-05-04
  • Status changed from In Progress to Workable

Well, this is getting more and more annoying. I am re-enabling the hourly cronjob; we'll have to revisit this later.

Actions #16

Updated by hellcp over 3 years ago

  • Status changed from Workable to Feedback
  • Assignee changed from pjessen to hellcp

I ran the index year by year, like so:

python3 manage.py update_index -v 3 -s 2020-12-01 -e 2021-12-01

for every year since 1990, and I was able to find posts from 2 hours ago in the search results, so I assume this is now fixed
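
Spelled out as a loop, the per-year runs described above could look roughly like this (a sketch; only the 2020/2021 window is shown verbatim in the comment, the rest is extrapolated):

cd /var/lib/mailman_webui
for year in $(seq 1990 2020); do
    python3 manage.py update_index -v 3 -s "${year}-12-01" -e "$((year + 1))-12-01"
done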

Actions #17

Updated by pjessen over 3 years ago

Better to test than to assume :-)

Looking at factory.lists.o.o, the latest message was "Can we let the LLVM metapackage diverge ...." - from 20 April 01:27. Before that one, it was a Tumbleweed snapshot, 20210418.

Searching for 'LLVM' - latest result is from two days ago.
Searching for 'metapackage' - latest is from 3 months ago
Searching for '20210418' - no hits.

users.lists.o.o -

searching for privoxy, latest result is 3 months old, but there was a thread about privoxy yesterday.
searching for 'playing', to find my own posting from yesterday, latest is 5 days old.

packaging.lists.o.o -

seven posts about 'imagewriter' from yesterday and today were not found.

Something isn't quite right - I read somewhere (Archwiki) that the index is updated every minute?

This is from the every minute cronjob:

Message 29100:
From mailman@mailman3.infra.opensuse.org  Tue Apr 20 06:37:04 2021
X-Original-To: mailman
Delivered-To: mailman@mailman3.infra.opensuse.org
From: "(Cron Daemon)" <mailman@mailman3.infra.opensuse.org>
To: mailman@mailman3.infra.opensuse.org
Subject: Cron <mailman@mailman3> django-admin runjobs minutely  --pythonpath /var/lib/mailman_webui --settings settings
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: <XDG_SESSION_ID=115921>
X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/4200>
X-Cron-Env: <DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/4200/bus>
X-Cron-Env: <LANG=en_US.UTF-8>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/var/lib/mailman>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=mailman>
X-Cron-Env: <USER=mailman>
Date: Tue, 20 Apr 2021 06:37:03 +0000 (UTC)

Error: None of the config files exist.

This appears to be the settings file:
-rw-r--r-- 1 mailman mailman 7303 Mar 21 19:49 /var/lib/mailman_webui/settings.py

Actions #18

Updated by hellcp over 3 years ago

Error: None of the config files exist.

only happens because we aren't using TOML config as per: https://github.com/maxking/django-settings-toml/blob/master/django_settings_toml.py#L41-L68

since we use config in settings.py, that will always be triggered, but isn't an actual issue :D

If you run python3 manage.py runjobs -l in /var/lib/mailman_webui as the mailman user, you will see how often and which jobs are run from cron.

Actions #19

Updated by pjessen over 3 years ago

hellcp wrote:

Error: None of the config files exist.

only happens because we aren't using TOML config as per: https://github.com/maxking/django-settings-toml/blob/master/django_settings_toml.py#L41-L68

since we use config in settings.py, that will always be triggered, but isn't an actual issue :D

Okay - except perhaps for the mails filling up the mailbox - currently about 30K mails :-) Can't we just disable those cron jobs then?

If you run python3 manage.py runjobs -l in /var/lib/mailman_webui as the mailman user, you will see how often and which jobs are run from cron.

So the index is being run once an hour, sounds good - except it's not indexing?
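
One quick check for whether the hourly job is actually writing anything is the modification times of the Xapian database files; a minimal sketch, assuming the usual index path:

ls -lt /var/lib/mailman_webui/xapian_index | head   # recently changed files mean the indexer is writing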

Actions #20

Updated by hellcp over 3 years ago

pjessen wrote:

Okay - except perhaps for the mails filling up the mailbox - currently about 30K mails :-) Can't we just disable those cron jobs then?

I fixed that; it shouldn't output anything anymore. Also, we need those cronjobs for the operation of the lists.
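
For the record, cron output for jobs like these is commonly silenced either by setting MAILTO or by redirecting output; a hypothetical sketch only, the actual change made here is not shown in the ticket:

MAILTO=""    # at the top of the crontab: discard job output instead of mailing it
* * * * * django-admin runjobs minutely --pythonpath /var/lib/mailman_webui --settings settings >/dev/null 2>&1   # schedule assumed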

So the index is being run once an hour, sounds good - except it's not indexing?

Yeah, I don't really understand why that's not working right now.

Actions #21

Updated by pjessen over 3 years ago

  • Subject changed from Fwd: support@lists.opensuse.org - search index not up-to-ate to Fwd: support@lists.opensuse.org - search index not up-to-date

Just some stuff I found in the mail:

From mailman@mailman3.infra.opensuse.org  Wed Apr 21 13:00:52 2021
Return-Path: <mailman@mailman3.infra.opensuse.org>
X-Original-To: mailman
Delivered-To: mailman@mailman3.infra.opensuse.org
Received: by mailman3.infra.opensuse.org (Postfix, from userid 4200)
        id D3EAB37D5; Wed, 21 Apr 2021 13:00:52 +0000 (UTC)
From: "(Cron Daemon)" <mailman@mailman3.infra.opensuse.org>
To: mailman@mailman3.infra.opensuse.org
Subject: Cron <mailman@mailman3> django-admin runjobs hourly  --pythonpath /var/lib/mailman_webui --settings settings
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: <XDG_SESSION_ID=117934>
X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/4200>
X-Cron-Env: <DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/4200/bus>
X-Cron-Env: <LANG=en_US.UTF-8>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/var/lib/mailman>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=mailman>
X-Cron-Env: <USER=mailman>
Message-Id: <20210421130052.D3EAB37D5@mailman3.infra.opensuse.org>
Date: Wed, 21 Apr 2021 13:00:52 +0000 (UTC)
Status: RO

[ERROR/MainProcess] Failed indexing 1 - 1 (retry 5/5): Unable to get write lock on /var/lib/mailman_webui/xapian_index: already locked (pid 14472): Unable to get write lock on /var/lib/mailman_webui/xapian_index: already locked
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/haystack/management/commands/update_index.py", line 111, in do_update
    backend.update(index, current_qs, commit=commit)
  File "/usr/lib/python3.6/site-packages/xapian_backend.py", line 283, in update
    database = self._database(writable=True)
  File "/usr/lib/python3.6/site-packages/xapian_backend.py", line 1178, in _database
    database = xapian.WritableDatabase(self.path, xapian.DB_CREATE_OR_OPEN)
  File "/usr/lib64/python3.6/site-packages/xapian/__init__.py", line 9205, in __init__
    _xapian.WritableDatabase_swiginit(self, _xapian.new_WritableDatabase(*args))
xapian.DatabaseLockError: Unable to get write lock on /var/lib/mailman_webui/xapian_index: already locked

From mailman@mailman3.infra.opensuse.org  Thu Apr 22 00:00:07 2021
Return-Path: <mailman@mailman3.infra.opensuse.org>
X-Original-To: mailman
Delivered-To: mailman@mailman3.infra.opensuse.org
Received: by mailman3.infra.opensuse.org (Postfix, from userid 4200)
        id 404A686C8; Thu, 22 Apr 2021 00:00:07 +0000 (UTC)
From: "(Cron Daemon)" <mailman@mailman3.infra.opensuse.org>
To: mailman@mailman3.infra.opensuse.org
Subject: Cron <mailman@mailman3> /usr/bin/mailman digests --periodic
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: <XDG_SESSION_ID=118661>
X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/4200>
X-Cron-Env: <DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/4200/bus>
X-Cron-Env: <LANG=en_US.UTF-8>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/var/lib/mailman>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=mailman>
X-Cron-Env: <USER=mailman>
Message-Id: <20210422000007.404A686C8@mailman3.infra.opensuse.org>
Date: Thu, 22 Apr 2021 00:00:07 +0000 (UTC)
Status: RO

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/flufl/lock/_lockfile.py", line 321, in lock
    os.link(self._claimfile, self._lockfile)
FileExistsError: [Errno 17] File exists: '/var/lock/mailman/dbcreate.lck|mailman3.infra.opensuse.org|19146|5602599306108747932' -> '/var/lock/mailman/dbcreate.lck'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/mailman", line 11, in <module>
    load_entry_point('mailman==3.3.4', 'console_scripts', 'mailman')()
  File "/usr/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 781, in main
    with self.make_context(prog_name, args, **extra) as ctx:
  File "/usr/lib/python3.6/site-packages/click/core.py", line 700, in make_context
    self.parse_args(ctx, args)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 1212, in parse_args
    rest = Command.parse_args(self, ctx, args)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 1048, in parse_args
    value, args = param.handle_parse_result(ctx, opts, args)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 1630, in handle_parse_result
    value = invoke_param_callback(self.callback, ctx, self, value)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 123, in invoke_param_callback
    return callback(ctx, param, value)
  File "/usr/lib/python3.6/site-packages/mailman/bin/mailman.py", line 94, in initialize_config
    initialize(value)
  File "/usr/lib/python3.6/site-packages/mailman/core/initialize.py", line 218, in initialize
    initialize_2(propagate_logs=propagate_logs)
  File "/usr/lib/python3.6/site-packages/mailman/core/initialize.py", line 177, in initialize_2
    config.db = getUtility(IDatabaseFactory, utility_name).create()
  File "/usr/lib/python3.6/site-packages/mailman/database/factory.py", line 50, in create
    with Lock(os.path.join(config.LOCK_DIR, 'dbcreate.lck')):
  File "/usr/lib/python3.6/site-packages/flufl/lock/_lockfile.py", line 439, in __enter__
    self.lock()
  File "/usr/lib/python3.6/site-packages/flufl/lock/_lockfile.py", line 353, in lock
    elif self._read() == self._claimfile:
  File "/usr/lib/python3.6/site-packages/flufl/lock/_lockfile.py", line 502, in _read
    with open(self._lockfile) as fp:
PermissionError: [Errno 13] Permission denied: '/var/lock/mailman/dbcreate.lck'

Actions #22

Updated by hellcp over 3 years ago

pjessen wrote:

Just some stuff I found in the mail:

I suspect we are hitting a concurrency issue with cron where all of the processes are started at once. I adjusted the crontab to see if that changes anything.
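
A common way to keep overlapping cron runs from fighting over the Xapian write lock is to serialise them with flock(1); a hypothetical crontab sketch (the lock file name and schedule are made up), not what was actually changed:

# skip the run if another indexing job still holds the lock
0 * * * * flock -n /var/lock/mailman-indexer.lock django-admin runjobs hourly --pythonpath /var/lib/mailman_webui --settings settings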

Actions #23

Updated by crameleon 12 months ago · Edited

And, did it change anything?

Actions #24

Updated by DocB 12 months ago

Search is still broken in the lists archive... I think Per gave up at the time.

Actions #25

Updated by crameleon 4 months ago

  • Status changed from Feedback to Resolved
  • Assignee changed from hellcp to crameleon

Resolved since implementation of https://progress.opensuse.org/issues/159873.

Actions #26

Updated by crameleon 4 months ago

