@@ -68,15 +68,15 @@ Given the following log snippet (trimmed) from Baicells QRTB, the normal flow sh
 1. `SetParameterValues` message followed by `GetParameterValues` - conditional; it only appears if a configuration change needs to be made on the eNB side. The actual parameters that are set in order to enable transmission based on a Domain Proxy grant may differ depending on the eNB model and the eNB's current configuration state. Please refer to your eNB device model implementation in `lte/gateway/python/magma/enodebd/devices`.

 ```console
-2022-04-25 15:41:22,771 DEBUG State transition from <WaitGetObjectParametersState> to <set_params>`
+2022-04-25 15:41:22,771 DEBUG State transition from \<WaitGetObjectParametersState\> to \<set_params\>`
 2022-04-25 15:41:22,772 DEBUG Sending TR069 request to set CPE parameter values: {...}`
-2022-04-25 15:41:22,773 DEBUG State transition from <SetParameterValuesState> to <wait_set_params>`
+2022-04-25 15:41:22,773 DEBUG State transition from \<SetParameterValuesState\> to \<wait_set_params\>`
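To check for this flow on a live gateway, one option is to filter the `enodebd` log for state transitions. This is a minimal sketch that runs the filter against a fabricated copy of the snippet above under `/tmp`; on a real AGW you would point `grep` at the actual `enodebd` log output instead.

```shell
# Minimal sketch: filter enodebd state transitions out of a log file.
# /tmp/enodebd.log is a fabricated sample built from the snippet above;
# the real log location on your AGW may differ.
printf '%s\n' \
  '2022-04-25 15:41:22,771 DEBUG State transition from <WaitGetObjectParametersState> to <set_params>' \
  '2022-04-25 15:41:22,772 DEBUG Sending TR069 request to set CPE parameter values: {...}' \
  '2022-04-25 15:41:22,773 DEBUG State transition from <SetParameterValuesState> to <wait_set_params>' \
  > /tmp/enodebd.log
grep 'State transition' /tmp/enodebd.log
```

The same filter applied to a full session log makes it easy to spot whether the `set_params`/`wait_set_params` states were ever entered.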
@@ -85,13 +85,13 @@ Given the following log snippet (trimmed) from Baicells QRTB, the normal flow sh
 1. Update transmission configuration from Domain Proxy and close the TR069 session - `notify_dp` is an `enodebd` transition state that calls the Domain Proxy `GetCBSDState` gRPC API. The `GetCBSDState` response contains data indicating whether the eNB radio should be disabled/turned off, or enabled together with transmission parameters. The `GetCBSDState` response data is translated into eNB-specific parameters that will be applied to the eNB. Please refer to your eNB device model implementation in `lte/gateway/python/magma/enodebd/devices`.

 ```console
-2022-04-25 15:41:22,917 DEBUG State transition from <WaitGetParametersState> to <end_session>
-2022-04-25 15:41:22,917 DEBUG State transition from <BaicellsQRTBEndSessionState> to <notify_dp>
+2022-04-25 15:41:22,917 DEBUG State transition from \<WaitGetParametersState\> to \<end_session\>
+2022-04-25 15:41:22,917 DEBUG State transition from \<BaicellsQRTBEndSessionState\> to \<notify_dp\>
 2022-04-25 15:41:23,046 DEBUG Updating desired config based on sas grant

`control_proxy` logs will show whether gRPC calls from `enodebd` towards the Domain Proxy are made. Since gRPC is binary-encoded, a tcpdump capture reveals little on this end of the communication.
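A quick way to confirm that `GetCBSDState` calls are being made is to count matching lines in the `control_proxy` log. The log line format below is fabricated purely for illustration; on a real AGW you would grep the actual `control_proxy` service log.

```shell
# Sketch with a fabricated control_proxy log; the line format is illustrative
# only and does not reproduce real control_proxy output.
printf '%s\n' \
  '2022-04-25 15:41:23,001 INFO Forwarding gRPC call GetCBSDState' \
  '2022-04-25 15:41:23,045 INFO gRPC call GetCBSDState completed' \
  > /tmp/control_proxy.log
# Count how many GetCBSDState-related lines appear.
grep -c 'GetCBSDState' /tmp/control_proxy.log
```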
@@ -117,7 +117,7 @@ no transmission and the radio transmission must be disabled.
 To view logs from individual Domain Proxy pods, execute `kubectl logs` with one of the pod names listed in the [previous chapter](#listing-domain-proxy-pods-in-kubernetes).

 - `Radio Controller` (RC) logs:
-  - logs related to AGW/enodebd <-> Domain Proxy communication
+  - logs related to AGW/enodebd \<-\> Domain Proxy communication
   - logs related to requests generated by the Active Mode Controller (Domain Proxy logic, which generates the appropriate SAS requests)
   - logs related to Database modifications resulting from incoming API calls (either from AGW or AMC)
 - `Configuration Controller` (CC) logs:
-  - logs related to Domain Proxy <-> SAS communication
+  - logs related to Domain Proxy \<-\> SAS communication
   - logs related to Database modifications resulting from processing SAS responses
 - `Active Mode Controller` (AMC) logs:
   - logs related to the internal business logic of the Domain Proxy functionality
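To pull logs for one component, it can help to select its pod name out of the pod listing first. The sketch below uses fabricated placeholder pod names; on a real cluster you would generate `/tmp/dp_pods.txt` with `kubectl get pods`.

```shell
# Sketch: select a component's pod name from a saved pod listing.
# Pod names are fabricated; on a real cluster produce the list with:
#   kubectl get pods -n <namespace> -o name > /tmp/dp_pods.txt
printf '%s\n' \
  'pod/domain-proxy-radio-controller-7d9fd6c9b-abcde' \
  'pod/domain-proxy-configuration-controller-66bd7f5d8-fghij' \
  'pod/domain-proxy-active-mode-controller-5c8b9d7f6-klmno' \
  > /tmp/dp_pods.txt
rc_pod=$(grep 'radio-controller' /tmp/dp_pods.txt | head -n1)
echo "$rc_pod"
# On a real cluster the next step would be: kubectl logs "$rc_pod"
```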
Note: The number of sites (enodeb) down, the users affected, and the outage duration are key indicators of service impact.

@@ -60,7 +60,7 @@ service:
 Dec 5 22:25:59 magma systemd[1]: magma@mme.service: Main process exited, code=killed, status=11/SEGV
 ```

-- Service crashes with a segmentation fault will create coredumps in the `/var/core/` folder. Verify whether coredumps have been created and obtain the coredump that matches the time of the outage/crash. Depending on the type of service crash, the name of the coredump will vary. More detail in <https://magma.github.io/magma/docs/lte/dev_notes#analyzing-coredumps>
+- Service crashes with a segmentation fault will create coredumps in the `/var/core/` folder. Verify whether coredumps have been created and obtain the coredump that matches the time of the outage/crash. Depending on the type of service crash, the name of the coredump will vary. More detail in \<https://magma.github.io/magma/docs/lte/dev_notes#analyzing-coredumps\>

**5. Get the backtrace using the coredumps**. To analyze the coredumps, you need three things.
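When matching a coredump to an outage, sorting the core folder by modification time is usually enough. The sketch simulates the folder under `/tmp` (with made-up file names) so it can run anywhere; on the AGW itself, run the same `ls` against `/var/core/`.

```shell
# Sketch: pick the most recent coredump by modification time.
# A fake core folder with made-up names is simulated under /tmp;
# on a real AGW use /var/core/ instead.
core_dir=/tmp/fake_var_core
mkdir -p "$core_dir"
touch "$core_dir/core-mme-older"
sleep 1
touch "$core_dir/core-mme-newest"
# Newest file first; compare its timestamp against the crash time.
ls -t "$core_dir" | head -n1
```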
@@ -80,20 +80,20 @@ an output like this:
 [Current thread is 1 (process 13887)]
 (gdb) bt
 #0 get_nas_specific_procedure_attach (ctxt=ctxt@entry=0x6210000d64b0) at /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/nas_procedures.c:203
-#1 0x000055b0080c2aa8 in emm_proc_attach_request (ue_id=ue_id@entry=2819, is_mm_ctx_new=is_mm_ctx_new@entry=true, ies=<optimized out>, ies@entry=0x608000054280)
+#1 0x000055b0080c2aa8 in emm_proc_attach_request (ue_id=ue_id@entry=2819, is_mm_ctx_new=is_mm_ctx_new@entry=true, ies=\<optimized out\>, ies@entry=0x608000054280)
   at /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/emm/Attach.c:318
-#2 0x000055b0080e5e1f in emm_recv_attach_request (ue_id=<optimized out>, originating_tai=originating_tai@entry=0x7f529e6c24d2, originating_ecgi=originating_ecgi@entry=0x7f529e6c2660,
   s_tmsi@entry=..., msg=0x629000302036) at /home/vagrant/magma/lte/gateway/c/oai/tasks/nas/nas_proc.c:185
 #7 0x000055b007ebbf87 in mme_app_handle_initial_ue_message (mme_app_desc_p=mme_app_desc_p@entry=0x60800000b500, initial_pP=initial_pP@entry=0x629000302026)
   at /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_bearer.c:727
-#8 0x000055b007ebab98 in handle_message (loop=<optimized out>, reader=<optimized out>, arg=<optimized out>) at /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:182
+#8 0x000055b007ebab98 in handle_message (loop=\<optimized out\>, reader=\<optimized out\>, arg=\<optimized out\>) at /home/vagrant/magma/lte/gateway/c/oai/tasks/mme_app/mme_app_main.c:182
 ```

**6. Obtain the event that triggered the crash**. Every time a service restarts, it generates a log file (e.g. mme.log). Inside the coredump folder you will find the log (e.g. mme.log) that was generated just before the crash. To understand which event triggered the crash, look at the last event (Attach Request, Detach, timer expiring, etc.) in the log file.
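Finding that last event is typically a grep over the preserved log. The mme.log content below is fabricated for illustration; the real log sits next to the coredump, as noted above, and its line format will differ.

```shell
# Sketch: show the last notable NAS event before the crash.
# /tmp/mme.log is a fabricated stand-in for the log preserved
# alongside the coredump; real mme.log lines look different.
printf '%s\n' \
  'NAS-EMM Received Attach Request' \
  'NAS-EMM Started timer T3450' \
  'NAS-EMM Timer T3450 expired' \
  > /tmp/mme.log
grep -E 'Attach Request|Detach|timer|Timer' /tmp/mme.log | tail -n1
```

Here the last matching line is the timer expiry, which in a real investigation would be the first candidate for the crash trigger.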